Unmanned aerial vehicle network hovering position optimization method based on multi-agent deep reinforcement learning

A multi-agent reinforcement learning technology, applied to neural learning methods, network planning, biological neural network models, etc., addressing the problems that existing UAV trajectory optimization methods are not applicable to real communication environments, ignore service fairness, and are difficult to control centrally.

Active Publication Date: 2020-10-16
DALIAN UNIV OF TECH

AI Technical Summary

Problems solved by technology

[0004] In summary, existing path planning techniques for ground communication networks served by UAV base stations have the following defects: (1) They do not consider the dynamics of the environment, i.e., the mobility of ground users. (2) They adopt centralized algorithms that rely on global information and centralized control; in large-scale scenarios centralized control is difficult to carry out, so a distributed control strategy is required, in which each UAV mobile base station makes decisions based solely on the information it obtains locally. (3) They neglect service fairness at the user level.
These shortcomings make existing UAV trajectory optimization methods for UAV networks unsuitable for real communication environments.



Examples


Embodiment Construction

[0049] In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below in conjunction with the embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention, not to limit it.

[0050] A multi-agent deep reinforcement learning based hovering position optimization method for UAV networks, applied to emergency communication restoration in areas lacking ground infrastructure or in post-disaster areas. As shown in Figure 1, the area lacks basic communication facilities, and UAVs serve as mobile base stations to provide communication coverage. The ground environment changes dynamically and ground devices may move, so each UAV base station must continually adjust its hovering position to provide better communication service (maximize system throughput). At the same time, service fairness and energy co...
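The patent text does not reproduce its channel model at this point. As an illustration of how a hovering position maps to the throughput being maximized, the sketch below uses the widely adopted probabilistic line-of-sight air-to-ground channel model; the function name and every parameter value (bandwidth, transmit power, noise power, sigmoid constants, excess losses) are illustrative assumptions, not taken from the patent.

```python
import math

def a2g_channel_rate(horizontal_dist, uav_height, bandwidth_hz=1e6,
                     tx_power_w=1.0, noise_w=1e-13,
                     a=9.61, b=0.16, fc_hz=2e9,
                     eta_los_db=1.0, eta_nlos_db=20.0):
    """Expected achievable rate (bit/s) from a hovering UAV to a ground user,
    using a probabilistic LoS air-to-ground model (illustrative parameters)."""
    d = math.sqrt(horizontal_dist ** 2 + uav_height ** 2)      # 3-D distance (m)
    theta = math.degrees(math.atan2(uav_height, max(horizontal_dist, 1e-9)))
    p_los = 1.0 / (1.0 + a * math.exp(-b * (theta - a)))       # LoS probability
    # Free-space path loss plus LoS/NLoS excess loss, all in dB.
    fspl_db = 20 * math.log10(d * fc_hz * 4 * math.pi / 3e8)
    pl_db = fspl_db + p_los * eta_los_db + (1 - p_los) * eta_nlos_db
    snr = tx_power_w * 10 ** (-pl_db / 10) / noise_w
    return bandwidth_hz * math.log2(1 + snr)                   # Shannon rate
```

A UAV hovering closer to (and more steeply above) a user sees a higher LoS probability and lower path loss, hence a higher rate, which is why the hovering position directly drives system throughput.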



Abstract

The invention discloses an unmanned aerial vehicle (UAV) network hovering position optimization method based on multi-agent deep reinforcement learning. The method comprises the following steps: first, modeling the channel model, coverage model, and energy loss model of the UAV-to-ground communication scene; modeling the throughput maximization problem of the UAV-to-ground communication network as a partially observable Markov decision process; obtaining local observation information and instantaneous rewards through continuous interaction between the UAVs and the environment, and performing centralized training on this information to obtain a distributed policy network; and deploying the policy network on each UAV, so that each UAV obtains a moving direction and moving distance decision from its own local observation information, adjusts its hovering position, and cooperates with the others in a distributed manner. In addition, proportional fair scheduling and UAV energy consumption information are introduced into the instantaneous reward function, so that the fairness of the UAVs' service to ground users is guaranteed while throughput is improved, energy consumption is reduced, and the UAV cluster can adapt to the dynamic environment.
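The abstract states that proportional fair scheduling and an energy-consumption term enter the instantaneous reward, without giving the exact formulas. The sketch below shows one common way to realize this: serve the user with the largest instantaneous-to-average rate ratio, track averages with an exponential moving average, and reward the log-sum of average throughputs minus an energy penalty. The function names, the averaging window, and the weight `w_energy` are all assumptions for illustration.

```python
import math

def pf_schedule_step(inst_rates, avg_rates, window=100.0):
    """One proportional-fair scheduling step: serve the user with the largest
    ratio of instantaneous to historical average rate, then update the
    exponential moving averages (window length is an illustrative assumption)."""
    k = max(range(len(inst_rates)),
            key=lambda i: inst_rates[i] / max(avg_rates[i], 1e-9))
    new_avg = [(1 - 1 / window) * ravg + (1 / window) * (r if i == k else 0.0)
               for i, (r, ravg) in enumerate(zip(inst_rates, avg_rates))]
    return k, new_avg

def instantaneous_reward(avg_rates, energy_step, w_energy=1e-3):
    """Reward sketch: proportional-fair (log-sum) utility of average user
    throughputs minus an energy-consumption penalty; weights are assumptions."""
    return sum(math.log(r + 1e-9) for r in avg_rates) - w_energy * energy_step
```

The log-sum utility grows fastest for users whose average throughput is lowest, which is what pushes the trained policies toward fair coverage rather than pure sum-rate maximization.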

Description

technical field [0001] The invention relates to the field of wireless communication technology, and in particular to a method for optimizing the hovering positions of a multi-UAV network based on multi-agent deep reinforcement learning. Background technique [0002] In recent years, owing to the high mobility, easy deployment, and low cost of UAVs, UAV-based communication technology has attracted extensive attention and has become a new research hotspot in the field of wireless communication. UAV-assisted communication technology mainly has the following application scenarios: the UAV acts as a mobile base station providing communication coverage for areas with scarce infrastructure or post-disaster areas; the UAV acts as a relay node providing wireless connectivity between two communication nodes; and UAV-based data distribution and collection. The present invention is mainly aimed at the first scenario, in which the hovering position of the UAV determines the coverage performance and throughput of the enti...
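The description notes that a UAV's hovering position determines its coverage performance. The patent's own coverage model is not reproduced here; as a minimal sketch, coverage is often decided by an elevation-angle threshold between the ground user and the UAV. The function name and threshold value below are illustrative assumptions.

```python
import math

def covered_users(uav_xy, uav_height, users, min_elev_deg=30.0):
    """Illustrative coverage model: a ground user is covered when its
    elevation angle toward the UAV exceeds a threshold (values are
    assumptions, not taken from the patent)."""
    served = []
    for ux, uy in users:
        horiz = math.hypot(ux - uav_xy[0], uy - uav_xy[1])
        elev = math.degrees(math.atan2(uav_height, max(horiz, 1e-9)))
        if elev >= min_elev_deg:
            served.append((ux, uy))
    return served
```

Under such a model, each horizontal hovering position induces a coverage disk on the ground, so repositioning the UAVs over moving users is what the reinforcement-learned policies must accomplish.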

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04B7/185, H04W16/18, H04W16/22, G06N3/08
CPC: G06N3/08, H04B7/18506, H04W16/18, H04W16/22, Y02D30/70
Inventor: 刘中豪, 覃振权, 卢炳先, 王雷, 朱明
Owner DALIAN UNIV OF TECH