Unmanned aerial vehicle route planning method based on improved Q-learning algorithm

A route planning and UAV technology, applied in the field of UAV route planning based on an improved Q-learning algorithm. It addresses problems such as increased time complexity, the curse of dimensionality, and heavy computational load, achieving the effects of reducing the number of exploration steps and speeding up convergence.

Active Publication Date: 2019-11-22
BEIHANG UNIV

AI Technical Summary

Problems solved by technology

However, reinforcement learning is ultimately a data-driven optimization algorithm; its unavoidable drawbacks are a heavy computational load and the need for large amounts of interaction data. The following three problems make it difficult for reinforcement-learning-based UAV route planning algorithms to meet practical requirements:
[0006] 1) In a large-scale state space and action space, the algorithm must repeatedly train on every state and action, which leads to the curse of dimensionality.
[0007] 2) After the UAV performs an action, the reward it receives is often delayed rather than immediate, which increases the time complexity.
[0009] UAV route planning in an unknown environment must confront the three problems above. To speed up convergence when using reinforcement learning for route planning, some scholars have added the Dyna learning framework to the Q-learning algorithm, establishing an environment model from a small amount of real interaction data.
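To make the per-state, per-action training cost concrete, here is a minimal tabular Q-learning update, the baseline that the patent improves on. This is a generic sketch, not the patent's improved algorithm; the state/action counts and hyperparameters are illustrative.

```python
import numpy as np

# Illustrative sizes and hyperparameters (not from the patent).
n_states, n_actions = 10, 4
alpha, gamma = 0.1, 0.9          # learning rate and discount factor

Q = np.zeros((n_states, n_actions))

def q_update(s, a, r, s_next):
    """Bellman update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])

# One observed transition: state 0, action 1, reward 1.0, next state 2.
q_update(0, 1, 1.0, 2)
```

Because every state-action pair needs many such updates before Q converges, the table (and the training time) grows multiplicatively with the state and action space sizes, which is the curse of dimensionality described above.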


Image

  • Unmanned aerial vehicle route planning method based on improved Q-learning algorithm

Examples


Embodiment 1

[0064] To apply the Q-learning algorithm to the UAV route planning problem, the UAV's flight environment is first modeled and discretized, converting the real continuous environment into a discrete environment usable by reinforcement learning (Q-learning). To simulate the environment in which the UAV flies, the environment is modeled in 3D: a 100 m × 100 m × 20 m three-dimensional grid map is designed, in which each grid cell is 1 m × 1 m × 1 m, and this grid map serves as the virtual flight environment for the UAV, as shown in Figure 3. The degree of discretization and the grid cell size strongly influence the computational results. For a global map of the same size, a larger grid cell shrinks the state space, greatly reduces the computational cost, and speeds up computation, but lowers the planning accuracy. Conversely, a smaller grid cell improves the planning accuracy but enlarges the state space and raises the computational cost accordingly.
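The discretization above can be sketched as follows. The cell-indexing and state-flattening helpers are assumptions for illustration; only the map dimensions (100 m × 100 m × 20 m at 1 m resolution) come from the embodiment.

```python
# Sketch of the 3D grid discretization from Embodiment 1:
# a 100 m x 100 m x 20 m space divided into 1 m x 1 m x 1 m cells.
NX, NY, NZ = 100, 100, 20        # grid cells along x, y, z (1 m per cell)

def to_cell(x, y, z):
    """Map a continuous position (in metres) to a discrete cell index."""
    i, j, k = int(x), int(y), int(z)
    assert 0 <= i < NX and 0 <= j < NY and 0 <= k < NZ
    return i, j, k

def cell_to_state(i, j, k):
    """Flatten a 3D cell into a single state id for a tabular Q-table."""
    return (i * NY + j) * NZ + k

n_states = NX * NY * NZ          # 200,000 discrete states at 1 m resolution
```

Doubling the cell edge to 2 m would cut each axis count in half and shrink the state space eightfold, which is the accuracy-versus-cost trade-off the paragraph describes.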



Abstract

The invention discloses an unmanned aerial vehicle route planning method based on an improved Q-learning algorithm. The method comprises the steps of: training an unmanned aerial vehicle in different simulation environments to obtain an a-priori knowledge list, and using the prior knowledge obtained from training to guide the unmanned aerial vehicle's exploration in an unknown environment, so that the number of exploration steps in the unknown environment is reduced; and introducing a single-position action-value-function convergence criterion, which replaces traditional Q-learning's convergence principle based on the Markov process chain and increases the convergence speed of the action value function.
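One way the prior-knowledge list could guide exploration is to restrict epsilon-greedy action selection to actions the list marks as promising for the current state. This is a hedged sketch: the `prior_list` dictionary format and the fallback to all actions are assumptions, since the abstract does not specify the list's structure.

```python
import random

def choose_action(state, Q_row, prior_list, epsilon=0.1):
    """Epsilon-greedy selection, restricted to prior-approved actions when available.

    Q_row: the Q-values for all actions in `state`.
    prior_list: assumed dict mapping state -> list of promising action ids;
                states without an entry fall back to the full action set.
    """
    candidates = prior_list.get(state, list(range(len(Q_row))))
    if random.random() < epsilon:
        return random.choice(candidates)        # explore within the prior's actions
    return max(candidates, key=lambda a: Q_row[a])  # exploit the best prior action

prior = {0: [1, 3]}   # hypothetical prior knowledge: in state 0, prefer actions 1 or 3
choose_action(0, [0.0, 0.5, 0.9, 0.2], prior, epsilon=0.0)  # returns 1 (Q=0.5 beats Q=0.2)
```

Pruning the candidate set this way is what reduces the exploration step count: the agent never wastes episodes on actions the simulation-trained prior already rules out.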

Description

Technical field

[0001] The invention relates to the field of UAV route planning, in particular to a UAV route planning method based on an improved Q-learning algorithm in an unknown environment.

Background technique

[0002] An unmanned aircraft, referred to as a UAV, is an aircraft not operated by an onboard pilot; it can be navigated and controlled by its onboard equipment during flight, and can also be remotely operated from the ground. Because a UAV requires no onboard pilot, it avoids the pilot's physiological limitations and keeps personnel safe. Compared with manned aircraft, UAVs are small, low-cost, safe, and well concealed. Compared with traditional platforms such as satellites, UAVs have low overall cost, high cost-effectiveness, and flexible, maneuverable operation. Therefore, countries are actively expanding the range of UAV applications. The technical and economic effects of using drones in...

Claims


Application Information

IPC(8): G05D1/10
CPC: G05D1/101
Inventor: Fu Li (富立), Li Runxia (李润夏), Wang Lingling (王玲玲)
Owner: BEIHANG UNIV