
Mixed-experience multi-agent reinforcement learning motion planning method

A multi-agent motion planning technology in the field of deep learning. It addresses problems such as difficulty adapting to dynamic, complex environments and poor training stability, achieving the effects of faster training, a better-trained planning strategy, and a reduced network update frequency.

Active Publication Date: 2021-09-03
NORTHWESTERN POLYTECHNICAL UNIV

AI Technical Summary

Problems solved by technology

[0004] The artificial potential field method offers a simple and effective obstacle-avoidance planning strategy, but it suffers from local minima and is difficult to apply in dynamic, complex environments. The MADDPG algorithm is insensitive to environment complexity and learns autonomously, but it converges with difficulty and its training stability is poor.
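The local-minimum problem noted above can be made concrete with a minimal sketch of the classic artificial potential field method. This is an illustrative implementation, not the patent's own code; the gain parameters `k_att`, `k_rep`, and influence distance `d0` are assumptions.

```python
import numpy as np

def apf_velocity(pos, goal, obstacles, k_att=1.0, k_rep=2.0, d0=3.0):
    """Classic artificial potential field: attractive pull toward the goal
    plus a repulsive push away from obstacles within influence distance d0."""
    # Attractive force is proportional to the vector toward the goal.
    force = k_att * (goal - pos)
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 0 < d < d0:
            # Repulsive magnitude grows like (1/d - 1/d0)/d^2 near the obstacle.
            force += k_rep * (1.0 / d - 1.0 / d0) * (1.0 / d**2) * (diff / d)
    return force

# Local-minimum illustration: the goal lies directly behind a single obstacle,
# so attraction and repulsion act along the same line and no lateral
# escape direction is ever produced.
pos = np.array([0.0, 0.0])
goal = np.array([4.0, 0.0])
obstacles = [np.array([2.0, 0.0])]
v = apf_velocity(pos, goal, obstacles)
```

With obstacle, goal, and agent collinear, the resulting velocity has no component perpendicular to the line, which is exactly the trap the reinforcement-learning approach is meant to avoid.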

Method used



Examples


Specific embodiment

[0178] 1. Establish a stochastic game model for multi-agent motion planning in complex environments.

[0179] This embodiment addresses a multi-agent reinforcement learning problem and uses a stochastic game as the environment model.

[0180] 1.1. Set the physical models of the agents and the obstacles. A schematic of the model is shown in Figure 2.

[0181] Each agent is modeled as a circular smart car; the number of agents is n, with n = 5 in this embodiment. All agents are assumed to share the same physical model. For agent i, the radius is r_i^a = 0.5 m, the speed is u_i = 1.0 m/s, and the velocity angle ψ_i denotes the angle between the velocity and the positive X-axis, with range (-π, π]. The target of agent i is a circular area of radius r_i^g = 1.0 m located at P_i^g, and its distance from agent i is D(P_i^a, P_i^g). When D(P_i^a, P_i^g) ≤ r_i^a + r_i^g, ...
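The physical model in [0181] can be sketched directly from the stated parameters. The kinematic update and time step `dt` are assumptions for illustration; the radii, speed, and goal test follow the embodiment.

```python
import math

R_AGENT = 0.5   # agent radius r_i^a (m), from the embodiment
SPEED   = 1.0   # constant speed u_i (m/s)
R_GOAL  = 1.0   # goal-region radius r_i^g (m)

def step(pos, psi, dt=0.1):
    """Advance a constant-speed agent whose velocity angle psi is measured
    from the positive X-axis, psi in (-pi, pi]. dt is an assumed time step."""
    x, y = pos
    return (x + SPEED * math.cos(psi) * dt,
            y + SPEED * math.sin(psi) * dt)

def reached_goal(p_agent, p_goal):
    """Goal test from the embodiment: D(P_a, P_g) <= r_a + r_g."""
    d = math.hypot(p_agent[0] - p_goal[0], p_agent[1] - p_goal[1])
    return d <= R_AGENT + R_GOAL
```

Since both the agent and its goal region are circles, the target counts as reached as soon as the two circles touch, i.e. at a center distance of 1.5 m.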



Abstract

The invention discloses a mixed-experience multi-agent reinforcement learning motion planning method, the ME-MADDPG algorithm. During MADDPG training, samples are generated in two ways: ordinary experience produced by exploration and learning, and high-quality experience produced by an artificial potential field method that successfully plans multiple unmanned aerial vehicles to their targets; the two kinds of experience are stored in separate experience pools. During training, the neural network draws samples from the two pools with a dynamically changing probability; the agents' state information and environment information serve as the network input, and the agents' velocities serve as the output. The network is updated slowly during training, so the multi-agent motion planning strategy is trained stably; ultimately the agents autonomously avoid obstacles in a complex environment and reach their respective target positions. The method can efficiently train a motion planning strategy with better stability and adaptability in complex dynamic environments.
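The abstract's dual-pool, dynamically sampled replay scheme can be sketched as follows. The patent does not disclose its sampling schedule, so the exponential decay, its rate, and the floor probability here are labeled assumptions for illustration.

```python
import math
import random

class MixedReplay:
    """Two experience pools: self-explored transitions and high-quality
    artificial-potential-field demonstration transitions, mixed per batch."""

    def __init__(self, capacity=100_000):
        self.explore, self.demo = [], []
        self.capacity = capacity

    def add(self, transition, from_demo=False):
        pool = self.demo if from_demo else self.explore
        pool.append(transition)
        if len(pool) > self.capacity:
            pool.pop(0)  # drop oldest when full

    def demo_prob(self, step, decay=1e-4, p_min=0.05):
        # ASSUMED schedule: rely heavily on APF demonstrations early,
        # then decay toward mostly self-explored experience.
        return max(p_min, math.exp(-decay * step))

    def sample(self, batch_size, step):
        # Split the batch between the two pools by the current probability.
        p = self.demo_prob(step)
        n_demo = min(len(self.demo), int(round(p * batch_size)))
        n_exp = min(len(self.explore), batch_size - n_demo)
        return random.sample(self.demo, n_demo) + random.sample(self.explore, n_exp)
```

Early in training almost every batch comes from the demonstration pool, which is what lets the potential-field experience bootstrap the otherwise hard-to-converge MADDPG training.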

Description

Technical field
[0001] The invention belongs to the technical field of deep learning, and in particular relates to a multi-agent reinforcement learning motion planning method.
Background technique
[0002] With the vigorous development of scientific theory and technology, multi-agent systems are ever more widely used in daily production and life, in fields such as driving, where multi-agent motion planning technology is required. The multi-agent motion planning problem is that of finding a conflict-free set of optimal paths for multiple agents from their starting positions to their target positions. How to make agents efficiently avoid obstacles and other agents and reach a designated area has become a major research problem.
[0003] The motion planning methods proposed so far can be broadly divided into global path planning and local path planning. Although global path planning can efficiently and quickl...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G05D1/02
CPC: G05D1/0214; G05D1/0221; G05D1/0223
Inventors: 万开方, 武鼎威, 高晓光
Owner: NORTHWESTERN POLYTECHNICAL UNIV