
Power optimization method for autonomous navigation UAV based on deep reinforcement learning

A technology combining reinforcement learning and autonomous navigation, applied to mechanical equipment, combustion engines, internal combustion piston engines, etc., that can solve problems such as poor power-consumption performance, achieving the effects of improving computing-power utilization efficiency, increasing battery life, and improving power-consumption utilization.

Active Publication Date: 2022-05-17
SUN YAT SEN UNIV

AI Technical Summary

Problems solved by technology

However, transmitting large amounts of data (such as image and video data) to a remote cloud server over a long wide-area network results in high end-to-end delay and high energy consumption on the drone; moreover, because of the drone's mobility, performance is strongly affected by bandwidth fluctuations and cannot be kept stable.
The second approach is to deploy the neural network model directly on the UAV's local computing device to achieve highly reliable, low-latency inference. However, because deep learning models usually incur large computation and storage overhead, this approach cannot provide good power-consumption performance.



Embodiment Construction

[0040] The accompanying drawings are for illustrative purposes only and should not be construed as limiting the present invention. To better illustrate this embodiment, certain components in the drawings may be omitted, enlarged or reduced, and do not represent the size of the actual product. Those skilled in the art will understand that some well-known structures and their descriptions may be omitted from the drawings. The positional relationships described in the drawings are for illustrative purposes only and should not be construed as limiting the present invention.

[0041] This embodiment discloses a power optimization method for autonomous navigation drones based on deep reinforcement learning. The method realizes autonomous navigation through a deep neural network and, combined with reinforcement learning, infers from the drone's environment state the power configuration that is optimal for the current conditions, thereby improving the drone's endurance. Specifically, the method includes the following...
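As a rough illustration of the workflow described in this embodiment, the sketch below pairs a navigation network that accepts variable input resolutions with a small reinforcement-learning policy that selects the resolution, and hence the compute scale, from the drone's environment state. This is a minimal PyTorch-style sketch under stated assumptions, not the patent's implementation; the names NavNet and ConfigPolicy, the candidate resolutions, and the two-dimensional (direction, speed) output are illustrative choices.

# Minimal sketch (not the patent's code): a navigation CNN whose input
# resolution is chosen per step by a learned policy, so that compute scale
# and power draw adapt to the current environment state.
import torch
import torch.nn as nn
import torch.nn.functional as F

RESOLUTIONS = [64, 96, 128, 160]      # hypothetical candidate input sizes

class NavNet(nn.Module):
    # Navigation CNN: front-camera image -> (control direction, speed).
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))      # makes the head size-agnostic
        self.head = nn.Linear(64, 2)      # outputs [direction, speed]

    def forward(self, img):
        return self.head(self.features(img).flatten(1))

class ConfigPolicy(nn.Module):
    # RL policy: environment state -> Q-values over candidate resolutions.
    def __init__(self, state_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, len(RESOLUTIONS)))

    def forward(self, state):
        return self.net(state)

def control_step(frame, state, nav, policy):
    # One step: pick a resolution from the state, resize the frame, run NavNet.
    # frame is a [1, 3, H, W] tensor from the front camera.
    with torch.no_grad():
        res = RESOLUTIONS[policy(state).argmax().item()]
        img = F.interpolate(frame, size=(res, res), mode="bilinear",
                            align_corners=False)
        direction, speed = nav(img)[0]
    return direction.item(), speed.item(), res

In a full training loop, ConfigPolicy would be updated with a standard value-based method (for example, DQN) against a reward that balances navigation quality and measured power; a sketch of such a state and reward follows the abstract below.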



Abstract

The invention discloses a power optimization method based on deep reinforcement learning for unmanned aerial vehicle applications. By taking the characteristics of the UAV's environment state into account, the computation scale of the convolutional neural network is dynamically configured to achieve low-latency, energy-efficient execution of the autonomous navigation task. The invention first designs and trains a deep neural network capable of accepting input layers of different sizes, which computes the drone's control direction and speed from images captured by its front camera. It then uses obstacle confounding factors and historical action vectors to infer the power-optimal neural network configuration adapted to the current environment, so as to improve the computing-energy utilization of the UAV device and prolong the battery life of the autonomous navigation UAV.
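To make the abstract's state and reward more concrete, the hedged sketch below assembles a state vector from an obstacle confounding factor and a history of recent (direction, speed) actions, and defines a reward that trades navigation error against measured power. The history length, the zero padding, and the weighting coefficients alpha and beta are assumptions for illustration; they are not specified in the text shown here.

# Hedged sketch: state and reward for the configuration agent, following the
# abstract. Exact factor definitions and weights are assumptions.
import collections
import torch

ACTION_HISTORY_LEN = 5   # hypothetical length of the historical action vector

def build_state(obstacle_factor, action_history):
    # state = [obstacle factor] + last K (direction, speed) pairs, zero-padded.
    hist = list(action_history)[-ACTION_HISTORY_LEN:]
    hist += [(0.0, 0.0)] * (ACTION_HISTORY_LEN - len(hist))
    flat = [v for pair in hist for v in pair]
    return torch.tensor([obstacle_factor] + flat, dtype=torch.float32)

def reward(nav_error, power_mw, alpha=1.0, beta=0.001):
    # Higher reward when both navigation error and power draw are low.
    return -(alpha * nav_error + beta * power_mw)

# Example usage with the ConfigPolicy sketched earlier (state_dim = 11 here).
history = collections.deque(maxlen=ACTION_HISTORY_LEN)
history.append((0.1, 0.8))            # a past (direction, speed) control output
state = build_state(obstacle_factor=0.3, action_history=history)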

Description

Technical field
[0001] The present invention relates to the technical fields of edge computing, deep learning, reinforcement learning and autonomous driving, and more specifically to a power optimization method for autonomous navigation UAVs based on deep reinforcement learning.
Background technique
[0002] In recent years, the autonomous navigation capability of drones has attracted widespread attention from the robotics community. Autonomous navigation drones offer advantages such as easy deployment, agility and mobility, and have been widely applied in many fields, such as fire detection, precision agriculture, express delivery and security inspection. The traditional way to realize autonomous navigation is the SLAM algorithm, which consists of two processes: perception on a given map and computation of control commands. However, separating the perception process from the control process not only hinders the positive feedback between the perception process and the ...

Claims


Application Information

Patent Type & Authority Patents(China)
IPC IPC(8): G05D1/10
CPCG05D1/101Y02T10/40
Inventor 陈旭林椿珉周知
Owner SUN YAT SEN UNIV