
Multi-unmanned aerial vehicle path collaborative planning method and device based on hierarchical reinforcement learning

A technology combining multiple UAVs and reinforcement learning, applied in the field of aircraft

Active Publication Date: 2019-07-09
BEIHANG UNIV
Cites 7 · Cited by 59

AI Technical Summary

Problems solved by technology

The method makes it possible to apply deep reinforcement learning to multi-UAV collaborative path planning in practical engineering.




Embodiment Construction

[0055] In order to make the purpose, technical solution, and advantages of the present invention clearer, a clear and complete description is given below in conjunction with the schematic structural diagram of the device of the present invention and the detailed steps of the algorithm.

[0056] The present invention provides a hierarchical-reinforcement-learning-based method for cooperative path planning of multiple UAVs in the air. The problem considered is twofold: each single UAV must find the shortest and safest path, and the UAVs must jointly satisfy certain constraints, which are generally set according to the needs of the actual task; for example, logistics UAVs should keep flying in a single column as much as possible and deliver a large batch of goods to the same distribution point.

[0057] In order to eliminate the "curse of dimensionality" problem of the classical tabular Q-learning method, a neural network is used to store the learned parameters and improve real-time performance, and the...
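The substitution of a Q-table by a neural network described in [0057] can be illustrated with a minimal sketch. The network size, dimensions, and learning rate below are illustrative assumptions, not the patent's architecture; the point is only that a small network maps a continuous state vector to one Q-value per action and is updated by a semi-gradient temporal-difference step, so no exponentially large table is needed.

```python
# Minimal sketch (illustrative, not the patent's exact architecture):
# a one-hidden-layer network replaces the Q-table, mapping a continuous
# state vector to one Q-value per discrete action.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, N_ACTIONS, HIDDEN = 4, 5, 16

# Network weights play the role the Q-table plays in tabular Q-learning.
W1 = rng.normal(scale=0.1, size=(STATE_DIM, HIDDEN))
W2 = rng.normal(scale=0.1, size=(HIDDEN, N_ACTIONS))

def q_values(state):
    """Forward pass: state vector -> (Q-values for all actions, hidden activations)."""
    h = np.tanh(state @ W1)
    return h @ W2, h

def td_update(state, action, reward, next_state, gamma=0.9, lr=0.05):
    """One semi-gradient TD step: move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    global W1, W2
    q, h = q_values(state)
    q_next, _ = q_values(next_state)
    target = reward + gamma * q_next.max()
    err = q[action] - target               # TD error for the taken action
    dq = np.zeros(N_ACTIONS)
    dq[action] = err                       # gradient flows only through Q(s, a)
    dh = (W2 @ dq) * (1.0 - h ** 2)        # backprop through tanh (pre-update W2)
    W2 -= lr * np.outer(h, dq)
    W1 -= lr * np.outer(state, dh)
    return err

# Repeatedly updating on one sample transition drives the TD error down.
s = rng.normal(size=STATE_DIM)
s_next = rng.normal(size=STATE_DIM)
first = abs(td_update(s, action=2, reward=1.0, next_state=s_next))
for _ in range(200):
    last = abs(td_update(s, action=2, reward=1.0, next_state=s_next))
```

Because the network generalizes across nearby states, memory no longer grows with the number of discretized state cells, which is what makes the approach usable for real-time multi-UAV planning.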



Abstract

The invention discloses a multi-UAV path collaborative planning method and device based on hierarchical reinforcement learning. The method comprises: extracting a feature space for each UAV in a plurality of UAVs; layering the tasks to be executed according to each UAV's task target and dividing them into a plurality of subtasks, each subtask being realized by a neural network; forming the neural networks corresponding to the subtasks and initializing the parameters of each network to obtain the initial networks; associating the networks with one another; taking the difference between the output results and the target output as a loss function; updating the parameters of each network by gradient descent; finishing training when the value of the loss function falls below a given threshold or a specified number of steps is reached; passing the feature vectors of each feature space through the corresponding networks in sequence to obtain the output values; and selecting the action that maximizes the output value as the control signal of each UAV, thereby realizing multi-UAV path collaborative planning.
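The inference stage summarized in the abstract can be sketched roughly as follows. All names, dimensions, and the use of untrained random weights here are assumptions for illustration, not details from the patent: each subtask is realized by its own network, a UAV's feature vector is passed through the subtask networks in sequence, and the action with the maximum output value becomes that UAV's control signal.

```python
# Hedged sketch of the abstract's inference pipeline (illustrative stand-ins
# for trained networks): feature vector -> subtask networks in sequence ->
# action values -> argmax action as the UAV's control signal.
import numpy as np

rng = np.random.default_rng(1)
FEAT_DIM, N_ACTIONS, N_SUBTASKS, N_UAVS = 6, 5, 3, 4

# One small network per subtask; the last subtask maps features to action values.
subtask_nets = [rng.normal(size=(FEAT_DIM, FEAT_DIM)) for _ in range(N_SUBTASKS - 1)]
head = rng.normal(size=(FEAT_DIM, N_ACTIONS))

def plan_action(features):
    """Pass one UAV's feature vector through each subtask network in turn,
    then select the action with the maximum output value."""
    x = features
    for net in subtask_nets:
        x = np.tanh(x @ net)        # intermediate subtask representation
    action_values = x @ head
    return int(np.argmax(action_values))

# One control signal per UAV, computed from that UAV's own feature space.
fleet_features = rng.normal(size=(N_UAVS, FEAT_DIM))
controls = [plan_action(f) for f in fleet_features]
```

Each UAV runs the same pipeline over its own feature space, so the per-UAV control signals are produced independently at inference time even though the networks were trained jointly.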

Description

Technical field

[0001] The invention belongs to the technical field of aircraft and relates to complex behavior control based on hierarchical reinforcement learning, such as multi-aircraft obstacle avoidance, collaborative path planning, and formation control, and in particular to a multi-UAV path collaborative planning method and device based on hierarchical reinforcement learning.

Background technique

[0002] With the great progress in computing power and artificial intelligence, the tasks that multi-rotor UAVs can perform are becoming more difficult and the types of tasks they can perform more complex, which brings great convenience to people's lives and to social productivity. Multi-UAV task coordination is a hot and difficult topic in current research on multi-agent control methods; it involves the path planning and obstacle avoidance of a single agent as well as perception and action coordination among multiple agents. ...

Claims


Application Information

Patent Timeline: no application data
IPC(8): G05D1/10
CPC: G05D1/104
Inventors: 曹先彬, 杜文博, 朱熙, 郭通, 李宇萌
Owner: BEIHANG UNIV