
A UAV Path Planning Method Based on Transfer Learning Strategy Deep Q-Network

A technology combining transfer learning and path planning, applied to two-dimensional position/course control, vehicle position/route/altitude control, instruments, and related fields, which can solve the problems of low success rate and slow convergence speed.

Active Publication Date: 2022-01-11
NANJING UNIV OF AERONAUTICS & ASTRONAUTICS

AI Technical Summary

Problems solved by technology

[0004] The purpose of the present invention is to provide a UAV path planning method combining transfer learning and the DQN algorithm, which solves the problems of slow convergence speed and low success rate when the DQN algorithm performs path planning in a dynamic environment.




Embodiment Construction

[0026] The technical solution of the present invention is described in detail below in conjunction with the accompanying drawings.

[0027] A UAV path planning method based on a transfer learning strategy deep Q-network according to the present invention specifically comprises the following steps:

[0028] Step 1: use the grid method to model and describe the dynamic environment in which the UAV operates.

[0029] (1.1) The dynamic environment in which the UAV is located is a 20x20 grid map, as shown in figure 2. In the map, the light pink squares are movable obstacles; the other black cells are immovable obstacles in the form of L-shaped walls, horizontal walls, vertical walls, T-shaped walls, inclined walls, square walls and irregular walls, used to test the obstacle avoidance ability of the agent; the yellow circle is the target position, and the red square is the agent's starting position. The target position and the agent's starting position can be randomly generated. W...
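A minimal sketch of this grid modelling step is given below, assuming integer cell codes, a simple random placement of obstacles, and a randomly generated start and goal. The cell codes, obstacle counts, and layout logic are illustrative assumptions; the patent's actual map is the one drawn in figure 2.

```python
# Hypothetical sketch of Step 1: modelling the UAV's environment as a 20x20 grid.
# Cell codes and random obstacle placement are assumptions for illustration only;
# the patent's figure 2 defines the actual static wall shapes.
import random

import numpy as np

FREE, STATIC_OBSTACLE, MOVING_OBSTACLE, START, GOAL = 0, 1, 2, 3, 4


def build_grid_world(size=20, n_static=40, n_moving=4, seed=0):
    """Return a size x size grid with static walls, moving obstacles,
    and randomly generated start and goal cells."""
    rng = random.Random(seed)
    grid = np.zeros((size, size), dtype=int)

    # Static obstacles (stand-ins for the L-shaped, horizontal, vertical,
    # T-shaped, inclined, square and irregular walls of the patent map).
    while np.count_nonzero(grid == STATIC_OBSTACLE) < n_static:
        r, c = rng.randrange(size), rng.randrange(size)
        grid[r, c] = STATIC_OBSTACLE

    # Movable obstacles occupy free cells; their positions change over time.
    moving = []
    while len(moving) < n_moving:
        r, c = rng.randrange(size), rng.randrange(size)
        if grid[r, c] == FREE:
            grid[r, c] = MOVING_OBSTACLE
            moving.append((r, c))

    def sample_free():
        # Draw a random free cell for the start or target position.
        while True:
            r, c = rng.randrange(size), rng.randrange(size)
            if grid[r, c] == FREE:
                return r, c

    start = sample_free()
    goal = sample_free()
    while goal == start:
        goal = sample_free()
    grid[start], grid[goal] = START, GOAL
    return grid, start, goal, moving
```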



Abstract

The present invention discloses a UAV path planning method based on a transfer learning strategy deep Q-network. The method first uses the grid method to model and describe the dynamic environment in which the UAV is located, and establishes the UAV's state space and action space models; secondly, it initializes the DQN network parameters and the current state of the UAV; then, under the static environment model, it trains the DQN using a reward mechanism based on the social force model to obtain the network weights and optimal action values; next, it uses transfer learning to transfer the network weights and optimal action values trained in the static environment to the dynamic environment and continues the neural network training to obtain the actions to be executed by the UAV; finally, it calculates the position of the UAV at the current moment, realizing UAV path planning in a dynamic environment. The invention effectively solves the problems of slow DQN training convergence, unsatisfactory path planning, and low success rate when the UAV performs path planning in a dynamic environment.
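The train-then-transfer workflow summarized above can be illustrated by the minimal sketch below, assuming a PyTorch implementation with a small fully connected Q-network, an epsilon-greedy policy, and an environment object exposing reset()/step(). The layer sizes, hyper-parameters, and the environment interface are assumptions; the social-force-based reward is treated as part of env.step() and is not reproduced here.

```python
# Minimal sketch of the abstract's workflow: train a DQN on the static map,
# transfer its weights to the dynamic map, and continue training there.
# Network shape, hyper-parameters, and env.reset()/env.step() are assumed.
import random
from collections import deque

import torch
import torch.nn as nn


class QNet(nn.Module):
    def __init__(self, n_states=2, n_actions=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_states, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)


def train_dqn(env, q_net, episodes=500, gamma=0.9, eps=0.1, lr=1e-3):
    """Generic DQN loop: run first on the static map, then continued on the
    dynamic map after the learned weights have been transferred."""
    opt = torch.optim.Adam(q_net.parameters(), lr=lr)
    buffer = deque(maxlen=10000)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy action selection over the discrete action space.
            if random.random() < eps:
                action = random.randrange(q_net.net[-1].out_features)
            else:
                with torch.no_grad():
                    action = int(q_net(torch.tensor(state, dtype=torch.float32)).argmax())
            next_state, reward, done = env.step(action)  # reward encodes the social-force terms
            buffer.append((state, action, reward, next_state, done))
            state = next_state

            if len(buffer) >= 64:
                batch = random.sample(buffer, 64)
                s, a, r, s2, d = map(lambda x: torch.tensor(x, dtype=torch.float32), zip(*batch))
                q = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
                target = r + gamma * q_net(s2).max(1).values * (1 - d)
                loss = nn.functional.mse_loss(q, target.detach())
                opt.zero_grad()
                loss.backward()
                opt.step()
    return q_net


# Transfer-learning step (static_env and dynamic_env are assumed environment objects):
# q_static = train_dqn(static_env, QNet())
# q_dynamic = QNet()
# q_dynamic.load_state_dict(q_static.state_dict())   # transfer the trained weights
# q_dynamic = train_dqn(dynamic_env, q_dynamic)      # continue training in the dynamic environment
```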

Description

technical field

[0001] The invention belongs to the field of unmanned aerial vehicle (UAV) path planning, and in particular relates to a UAV path planning method based on transfer learning and DQN (Deep Q-Network), applying transfer learning and deep reinforcement learning to UAV path planning in a dynamic environment.

[0002] technical background

[0003] UAV path planning is a core issue in UAV technology research, and the related algorithms are developing rapidly. Traditional methods include the Dijkstra shortest-path search method (a greedy algorithm), the A* algorithm, the ant colony optimization algorithm, reinforcement learning algorithms, etc. The core idea of Dijkstra's algorithm is that at each exploration step the next vertex selected is the one with the smallest accumulated distance from the starting point, until the target is found. This method is only suitable for static maps whose overall information is known in advance, and its efficiency is low; the A...
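For reference, a minimal Dijkstra search on a 4-connected grid illustrates the greedy expansion described in the background above; the grid encoding and unit step costs are assumptions made for this example, not part of the patent.

```python
# Illustrative Dijkstra's shortest-path search on a grid: repeatedly expand the
# unvisited cell with the smallest accumulated distance from the start.
import heapq


def dijkstra(grid, start, goal):
    """grid: 2D list where 0 = free cell, 1 = obstacle. Returns path length or None."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return d
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry, already found a shorter route
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1  # unit cost per move
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return None  # goal unreachable
```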


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G05D1/02
CPC: G05D1/0223; G05D1/0214; G05D1/0221
Inventor: 丁勇, 汪常建, 胡佩瑶
Owner: NANJING UNIV OF AERONAUTICS & ASTRONAUTICS