
Vehicle Control Method Based on Reinforcement Learning Control Strategy in Mixed Fleet

A control strategy and vehicle control technology, applied to vehicle position/route/altitude control, control/regulation systems, non-electric variable control, and the like. It addresses the problems of results that deviate easily and excessive dependence on human factors, and achieves reduced computational cost, less platoon-formation deviation, and improved stability.

Active Publication Date: 2021-07-16
YANSHAN UNIV

AI Technical Summary

Problems solved by technology

Prior-art systems include a state monitoring module, a simulated driving module, an analysis module, a comparison module, and so on; by analyzing driving operation defects they point out the driver's operating errors. Such approaches depend too heavily on human factors, and their results are prone to deviation.



Examples


Embodiment Construction

[0066] As shown in Figure 1, a problem in which the state transition probability is known is generally called a "model-based" problem, while one in which it is unknown is called a "model-free" problem. The Markov decision process of the prior art is a modeling method proposed for the model-free case. The mixed-traffic reinforcement learning algorithm proposed by the present invention is a model-free control strategy: the method builds a database from the driving data of the vehicles in the mixed fleet, such as speed, acceleration, and driving distance, and takes this database together with the traffic conditions on the road as the environment. Each vehicle in the formation is regarded as an agent, and the environment feeds back states and rewards to the agents. The input is the defined environment state, vehicle state, and optimal control action, and the output is the reward value produced by that action in that state. It can be applied t...
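A rough sketch of this agent-environment loop is given below. It is illustrative only: the class, attribute, and parameter names (for example PlatoonEnvironment, the 0.1 s control period, and the 20 m desired gap) are assumptions for the sketch, not the patent's implementation.

```python
# Illustrative sketch only: hypothetical names, not the patent's actual code.
# Each vehicle in the mixed platoon is treated as an agent; the environment
# is built from recorded driving data (speed, acceleration, spacing) plus
# the current road traffic situation, and returns feedback state and reward.
import random

class PlatoonEnvironment:
    """Toy environment holding per-vehicle driving data."""

    def __init__(self, driving_database):
        self.db = driving_database  # e.g. {vehicle_id: {"speed": ..., "accel": ..., "gap": ...}}

    def observe(self, vehicle_id):
        # Environment state + vehicle state seen by one agent.
        record = self.db[vehicle_id]
        return (record["speed"], record["accel"], record["gap"])

    def step(self, vehicle_id, action):
        # Apply the control action (here: a commanded acceleration) and
        # return the next state and a reward penalising spacing error.
        record = self.db[vehicle_id]
        record["accel"] = action
        record["speed"] += action * 0.1                      # 0.1 s control period (assumed)
        record["gap"] += (random.uniform(-0.5, 0.5) - action) * 0.1  # toy leader perturbation
        reward = -abs(record["gap"] - 20.0)                  # 20 m desired gap (assumed)
        return self.observe(vehicle_id), reward

# One interaction step for every vehicle-agent in the formation.
database = {0: {"speed": 25.0, "accel": 0.0, "gap": 22.0},
            1: {"speed": 24.0, "accel": 0.0, "gap": 18.0}}
env = PlatoonEnvironment(database)
for vid in database:
    state = env.observe(vid)
    action = 0.5 if state[2] > 20.0 else -0.5                # placeholder policy
    next_state, reward = env.step(vid, action)
```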



Abstract

The invention provides a vehicle control method based on a reinforcement learning control strategy in a mixed fleet, which includes: initializing the mixed fleet and establishing a fixed reference frame and an inertial reference frame; establishing a model of the longitudinal queue of mixed vehicles in the inertial reference frame; constructing a Lagrangian quadratic queue-following cost function and deriving the expression of the Q-value function; for the information describing the influence of surrounding vehicles on the ego vehicle, first training with a deep Q-learning network and then training the parameters with the DDPG algorithm; when the Q-value function and the control input both converge, the solution of the current optimal control strategy is complete; the optimal control strategy is then input into the model of the mixed vehicle longitudinal queue and the mixed fleet updates its own state; the cycle is repeated until the control task for the vehicles in the mixed fleet is finally completed. The method of the invention solves the problem of autonomous training of the mixed fleet.
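The training cycle described in the abstract can be sketched roughly as follows. This is a minimal, self-contained outline under assumed interfaces: the linear critic and actor, the toy car-following dynamics, the learning rates, and the tolerance are placeholders standing in for the patent's deep Q network and DDPG algorithm, not its implementation.

```python
# Minimal stand-in for the training cycle: critic (Q) step, actor step,
# and a joint convergence check on both the Q function and the control input.
import numpy as np

rng = np.random.default_rng(0)

def toy_transition(state, action):
    """Toy car-following dynamics: state = (spacing error, relative speed)."""
    dt = 0.1
    next_state = state + dt * np.array([state[1], action])
    reward = -(state[0] ** 2 + 0.1 * action ** 2)   # quadratic following cost
    return next_state, reward

def q_value(w, state, action):
    feats = np.array([state[0] ** 2, state[1] ** 2, action ** 2, 1.0])
    return w @ feats, feats

def policy(theta, state):
    return theta @ np.array([state[0], state[1], 1.0])

w = np.zeros(4)        # critic parameters (stand-in for the deep Q network)
theta = np.zeros(3)    # actor parameters (stand-in for the DDPG actor)
gamma, alpha_q, alpha_pi, tol = 0.95, 1e-3, 1e-4, 1e-4

for episode in range(5000):
    w_old, theta_old = w.copy(), theta.copy()
    state = rng.normal(size=2)

    # 1) Critic step: temporal-difference update of the Q-value parameters.
    action = policy(theta, state) + 0.1 * rng.normal()       # exploration noise
    next_state, reward = toy_transition(state, action)
    q_sa, feats = q_value(w, state, action)
    q_next, _ = q_value(w, next_state, policy(theta, next_state))
    w += alpha_q * (reward + gamma * q_next - q_sa) * feats

    # 2) Actor step: nudge the policy toward actions with a higher Q value.
    eps = 1e-2
    q_plus, _ = q_value(w, state, policy(theta, state) + eps)
    q_minus, _ = q_value(w, state, policy(theta, state) - eps)
    dq_da = (q_plus - q_minus) / (2 * eps)
    theta += alpha_pi * dq_da * np.array([state[0], state[1], 1.0])

    # 3) Stop only when the Q function and the control input have both converged.
    if (np.linalg.norm(w - w_old) < tol and
            np.linalg.norm(theta - theta_old) < tol):
        break
```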

Description

technical field
[0001] The invention belongs to the technical field of intelligent traffic control, and in particular relates to a vehicle control method based on a reinforcement learning control strategy in a mixed fleet.
Background technique
[0002] With the rapid development of artificial intelligence technology, unmanned driving has become increasingly mature, and the mixed longitudinal car-following queue composed of manned and unmanned vehicles has become a hot research direction in the field of intelligent transportation. The longitudinal platoon car-following problem combines traditional dynamics and kinematics methods to study the influence of the driving state of the vehicle ahead on the following vehicles. However, because the positions of manned and unmanned vehicles in practical mixed longitudinal platoons are random, and because driver behavior must be identified in advance as part of the platooning system, there ...
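To make the car-following setting concrete, the sketch below shows a generic longitudinal kinematic update for a single follower. It is only an illustration under assumed values (0.1 s step, 20 m desired gap) and is not the platoon model defined by the invention.

```python
# Generic longitudinal car-following kinematics (illustrative only; the
# patent's inertial-frame platoon model is not reproduced here, and the
# time step and desired gap are assumed values).

def follow_step(leader_pos, leader_vel, pos, vel, accel, dt=0.1, desired_gap=20.0):
    """Advance one follower by one control period and report its spacing error."""
    new_pos = pos + vel * dt + 0.5 * accel * dt ** 2   # position update
    new_vel = vel + accel * dt                         # velocity update
    spacing_error = (leader_pos - new_pos) - desired_gap
    rel_speed = leader_vel - new_vel
    return new_pos, new_vel, spacing_error, rel_speed

# Example: a follower 20 m behind a leader travelling at 25 m/s.
pos, vel, err, dv = follow_step(leader_pos=100.0, leader_vel=25.0,
                                pos=80.0, vel=24.0, accel=0.3)
```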


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G05D1/02
CPC: G05D1/0221; G05D1/0223; G05D1/0253; G05D1/0276; G05D1/0295; G05D2201/0212
Inventor: 罗小元, 刘劭玲, 李孟杰, 郑心泉, 刘乐
Owner: YANSHAN UNIV