
A Quadrotor UAV Route Following Control Method Based on Deep Reinforcement Learning

The invention relates to quadrotor unmanned aerial vehicle (UAV) and reinforcement learning technology, applied to three-dimensional position/course control, vehicle position/route/height control, attitude control, and the like. It addresses the problems of an unstable learning process, low control accuracy, and the inability to achieve continuous control.

Active Publication Date: 2020-10-27
NORTHWESTERN POLYTECHNICAL UNIV
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

[0004] In order to overcome the deficiencies in the prior art, the present invention proposes a quadrotor UAV route following control method based on deep reinforcement learning. The method first establishes the Markov model of the quadrotor UAV route following deep reinforcement learning algorithm, and then uses the deep deterministic policy gradient (DDPG) algorithm for deep reinforcement learning. This overcomes the problems of low control accuracy, inability to achieve continuous control, and an unstable learning process in previous reinforcement-learning-based methods, and realizes high-precision quadrotor UAV route following control.
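For orientation only, the sketch below shows one way such a route-following Markov decision process could be structured. The state, action, and reward definitions here (position error to the current waypoint, a commanded acceleration, and negative tracking error) are illustrative assumptions and are not taken from the patent text.

```python
import numpy as np

# Illustrative-only sketch of a route-following Markov decision process.
# The state, action, and reward definitions are assumptions for this example,
# not the formulation given in the patent.
class RouteFollowingEnv:
    def __init__(self, waypoints, dt=0.05):
        self.waypoints = np.asarray(waypoints, dtype=float)
        self.dt = dt

    def reset(self):
        self.position = np.zeros(3)      # UAV starts at the origin, hovering
        self.velocity = np.zeros(3)
        self.idx = 0                     # index of the current target waypoint
        return self._state()

    def step(self, accel_cmd):
        # Assumed action: a commanded acceleration vector (simplified dynamics).
        self.velocity += np.asarray(accel_cmd, dtype=float) * self.dt
        self.position += self.velocity * self.dt
        error = np.linalg.norm(self.waypoints[self.idx] - self.position)
        reward = -error                  # assumed reward: negative tracking error
        done = False
        if error < 0.1:                  # assumed waypoint-capture radius
            if self.idx < len(self.waypoints) - 1:
                self.idx += 1            # move on to the next waypoint
            else:
                done = True              # final waypoint reached: route complete
        return self._state(), reward, done

    def _state(self):
        # Assumed state: position error to the target waypoint plus velocity.
        return np.concatenate([self.waypoints[self.idx] - self.position,
                               self.velocity])
```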



Examples


Embodiment

[0129] This embodiment realizes autonomous flight of the quadrotor UAV following a random route. Set the UAV mass m = 0.62 kg and the gravitational acceleration g = 9.81 m/s². The UAV initially hovers and then flies from the starting coordinates (0, 0, 0) to perform the mission. When the UAV completes the target route and reaches its end point, the system automatically refreshes a new target route; the route following flight is shown in figure 2.
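As a hypothetical illustration of how this embodiment's constants and the automatic route refresh could be encoded, consider the following sketch; the random-route generator and its parameters are assumptions, not details from the patent.

```python
import numpy as np

# Physical constants from paragraph [0129]; the random-route generator below
# is a hypothetical stand-in for however the patent's system produces routes.
MASS = 0.62                    # UAV mass m, kg
GRAVITY = 9.81                 # gravitational acceleration g, m/s^2
START_POSITION = np.zeros(3)   # the UAV starts hovering at coordinates (0, 0, 0)

def new_random_route(num_waypoints=5, span=10.0, rng=None):
    """Generate a hypothetical random target route as a sequence of waypoints."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.uniform(-span, span, size=(num_waypoints, 3))

# When the UAV completes a route and reaches its end point, the system
# automatically refreshes a new target route:
route = new_random_route()
# ... fly the route with the trained policy, then refresh:
route = new_random_route()
```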

[0130] The initial φ, θ, and ψ are all 0°, as obtained from the UAV's onboard sensors. To facilitate neural network processing, the roll, pitch, and yaw angles are cosine-transformed before being included in the state. Set the single-step movement time of the UAV Δt = 0.05 s, the thrust coefficient of the quadrotor UAV c_T = 0.00003, and the arm length d = 0.23 m.
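The following sketch uses the parameter values from this paragraph and shows one way the cosine transform of the attitude angles might be applied before they enter the network state; the exact state layout is an assumption for illustration.

```python
import numpy as np

# Parameter values from paragraph [0130]; the idea of feeding the cosine of
# the attitude angles to the network follows the text, while the exact state
# layout is assumed for this example.
DT = 0.05              # single-step movement time, s
C_T = 0.00003          # quadrotor thrust coefficient c_T
ARM_LENGTH = 0.23      # arm length d, m

def attitude_features(roll_deg, pitch_deg, yaw_deg):
    """Cosine-transform the roll, pitch, and yaw angles (given in degrees)
    so the network receives bounded inputs in [-1, 1]."""
    return np.cos(np.radians([roll_deg, pitch_deg, yaw_deg]))

# Initially roll = pitch = yaw = 0 deg, so the transformed features are all 1.
print(attitude_features(0.0, 0.0, 0.0))    # -> [1. 1. 1.]
```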

[0131] Solve for the position r of the drone in the inertial coordinate system ...
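Paragraph [0131] is truncated in this excerpt. Purely as a sketch of the standard quadrotor translational model (not necessarily the patent's exact formulation), the inertial position r could be propagated with a simple Euler step as follows; the rotation matrix and rotor-thrust inputs are generic textbook quantities.

```python
import numpy as np

# Assumed standard quadrotor translational model, integrated with a simple
# (semi-implicit) Euler step:  m * r_ddot = R @ [0, 0, T] - m * g * e3.
# This is a generic textbook sketch, not necessarily the patent's formulation.
def propagate_position(r, v, R, rotor_thrusts, m=0.62, g=9.81, dt=0.05):
    """Advance the inertial position r and velocity v by one time step.

    r, v          : position and velocity in the inertial frame (3-vectors)
    R             : body-to-inertial rotation matrix (3x3)
    rotor_thrusts : the four rotor thrusts; their sum is the total thrust T
    """
    total_thrust = float(np.sum(rotor_thrusts))
    accel = R @ np.array([0.0, 0.0, total_thrust]) / m - np.array([0.0, 0.0, g])
    v_next = v + accel * dt
    r_next = r + v_next * dt       # semi-implicit Euler for a stable update
    return r_next, v_next

# Hovering check: with total thrust equal to m * g and identity attitude,
# the position stays (approximately) constant.
r, v = np.zeros(3), np.zeros(3)
print(propagate_position(r, v, np.eye(3), [0.62 * 9.81 / 4] * 4))
```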



Abstract

The invention provides a quadrotor unmanned aerial vehicle route following control method based on deep reinforcement learning. The method comprises the following steps: first, establishing a Markov model of the quadrotor UAV route following deep reinforcement learning algorithm, and then performing deep reinforcement learning with the deep deterministic policy gradient (DDPG) algorithm. The problems of relatively low control precision, inability to perform continuous control, and an unstable learning process in conventional reinforcement-learning-based methods are solved, and high-precision quadrotor UAV route following control is achieved. By combining reinforcement learning with a deep neural network, the method improves the learning and generalization ability of the model and avoids the complexity and error-proneness of manually piloting a UAV in an uncertain environment, so that the UAV completes the route following task more safely and efficiently. Meanwhile, the method has good application prospects in scenarios such as UAV target tracking and autonomous obstacle avoidance.
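As a rough illustration of the actor-critic structure that DDPG combines with deep neural networks, the sketch below sets up online and target networks together with the Bellman target and soft-update steps; the network sizes, state/action dimensions, and hyperparameters are assumed for this example and are not values from the patent.

```python
import copy
import torch
import torch.nn as nn

# Minimal DDPG building blocks, shown only to illustrate the actor-critic
# structure referred to above; the dimensions and hyperparameters are assumptions.
STATE_DIM, ACTION_DIM, GAMMA, TAU = 9, 4, 0.99, 0.005

actor = nn.Sequential(                       # deterministic policy: state -> action
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, ACTION_DIM), nn.Tanh())
critic = nn.Sequential(                      # Q-function: (state, action) -> value
    nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
    nn.Linear(64, 1))
actor_target = copy.deepcopy(actor)          # slowly-updated target networks
critic_target = copy.deepcopy(critic)

def critic_td_target(reward, next_state, done):
    """Bellman target y = r + gamma * (1 - done) * Q'(s', mu'(s'))."""
    with torch.no_grad():
        next_action = actor_target(next_state)
        next_q = critic_target(torch.cat([next_state, next_action], dim=-1))
        return reward + GAMMA * (1.0 - done) * next_q

def soft_update(target, online, tau=TAU):
    """Blend the online weights into the target network: theta' <- tau*theta + (1-tau)*theta'."""
    for t_param, o_param in zip(target.parameters(), online.parameters()):
        t_param.data.mul_(1.0 - tau).add_(tau * o_param.data)
```

The deterministic actor makes continuous control actions possible, and the slowly tracking target networks are the standard DDPG ingredient for stabilizing learning, which matches the problems the abstract says the method addresses.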

Description

Technical field

[0001] The invention belongs to the field of intelligent control, and in particular relates to a method for controlling UAV route following.

Background

[0002] In recent years, as the quadrotor UAV has excelled in many fields such as industrial inspection, rescue and disaster relief, and daily-life assistance, it has gradually become a new frontier and hot spot in military aviation research. For UAVs to complete high-altitude route following, target tracking, and other missions where humans cannot reach the scene, ensuring the autonomy and controllability of UAV flight is the most basic functional requirement and the premise for accomplishing various complex operational tasks. For many reasons, autonomous decision-making and control of UAVs still face huge challenges in the field of intelligent control. First, UAV flight control has multiple inputs and outputs, and its kinematic and dynamic models are complex, with high nonlinea...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G05D1/08; G05D1/10
CPC: G05D1/0088; G05D1/0825; G05D1/101
Inventors: 李波, 杨志鹏, 万开方, 高晓光, 甘志刚, 梁诗阳, 越凯强
Owner: NORTHWESTERN POLYTECHNICAL UNIV