Multi-agent reinforcement learning-based multi-aircraft air combat decision-making method

A multi-agent reinforcement learning technology applied in the field of unmanned aerial vehicles, which addresses the problems of heavy computation and difficulty in coping with the battlefield situation, and achieves good modularization, rapid porting, and a well-defined input/output interface.

Pending Publication Date: 2021-12-14
NORTHWESTERN POLYTECHNICAL UNIV

AI Technical Summary

Problems solved by technology

The method of the invention effectively solves the problems that traditional multi-agent cooperative air combat requires a large amount of computation and has difficulty coping with a rapidly changing battlefield situation that must be resolved in real time.

Method used


Examples


Specific embodiment

[0182] The situation in the two-on-two engagement is shown in Figure 5. The four aircraft are in the same plane: red aircraft 1 and red aircraft 2 are in front of blue aircraft 1 and blue aircraft 2, respectively. Blue aircraft 1 and blue aircraft 2 tend to close on the joint attack area of red aircraft 1 and red aircraft 2, and red aircraft 1 and red aircraft 2 likewise tend to close on the joint attack area of blue aircraft 1 and blue aircraft 2. Therefore, red aircraft 1 and red aircraft 2 are evenly matched with blue aircraft 1 and blue aircraft 2.

[0183] After training, 1000 trials were run; the numbers of red-team and blue-team victories are shown in Table 1. The red team's win rate is 51.8% and the blue team's win rate is 48.2%.

[0184] Table 1 Number of red team victories and blue team victories

[0185]
Condition                              Frequency
Red aircraft 1 hits blue aircraft 1    226
Red aircraft 1 ...
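As a small check on the arithmetic in paragraph [0183], the sketch below (a hypothetical helper, not part of the patent) tallies per-trial winners into win rates; 518 red wins and 482 blue wins out of 1000 trials reproduce the reported 51.8% / 48.2% split.

```python
# Hypothetical helper, not from the patent: turn a list of per-trial winners
# into win rates, as summarized in paragraph [0183] and Table 1.
from collections import Counter

def win_rates(outcomes):
    """outcomes: one entry per trial, either 'red' or 'blue' (the winning team)."""
    counts = Counter(outcomes)
    total = len(outcomes)
    return {team: counts[team] / total for team in ("red", "blue")}

# 518 red wins and 482 blue wins over 1000 trials give the reported rates.
outcomes = ["red"] * 518 + ["blue"] * 482
print(win_rates(outcomes))  # {'red': 0.518, 'blue': 0.482}
```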



Abstract

The invention discloses a multi-agent reinforcement learning-based multi-aircraft air combat decision-making method. The method comprises the following steps: first, establishing a six-degree-of-freedom model of an unmanned aerial vehicle, a missile model, a neural network normalization model, a battlefield environment model, and a situation assessment and target assignment model; then, adopting the MAPPO algorithm as the multi-agent reinforcement learning algorithm and designing a corresponding reward function for the specific air combat environment; and finally, combining the constructed unmanned aerial vehicle model with the multi-agent reinforcement learning algorithm to generate the final multi-agent reinforcement learning-based multi-aircraft cooperative air combat decision-making method. The method effectively solves the problems that traditional multi-agent cooperative air combat requires a large amount of computation and has difficulty coping with a battlefield situation that changes rapidly and must be resolved in real time.
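The abstract names MAPPO as the learning algorithm but the listing does not show how its update is formed. The sketch below is a minimal, hedged illustration of a MAPPO-style loss (written in PyTorch, not the patent's actual implementation): each UAV keeps its own actor over its local observation while a shared centralized critic scores the joint observation, and the actor is trained with the standard PPO clipped surrogate. All dimensions, layer sizes, and names are assumptions.

```python
# Minimal MAPPO-style loss sketch (illustrative assumptions, not the patent's code):
# decentralized actor per UAV + centralized critic over the joint observation.
import torch
import torch.nn as nn

OBS_DIM, JOINT_OBS_DIM, N_ACTIONS, CLIP_EPS = 16, 64, 9, 0.2

actor = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.Tanh(), nn.Linear(64, N_ACTIONS))
critic = nn.Sequential(nn.Linear(JOINT_OBS_DIM, 64), nn.Tanh(), nn.Linear(64, 1))

def mappo_losses(obs, joint_obs, actions, old_log_probs, advantages, returns):
    """Clipped policy loss for one agent's actor plus value loss for the shared critic."""
    dist = torch.distributions.Categorical(logits=actor(obs))
    log_probs = dist.log_prob(actions)
    ratio = torch.exp(log_probs - old_log_probs)                 # pi_new / pi_old
    clipped = torch.clamp(ratio, 1.0 - CLIP_EPS, 1.0 + CLIP_EPS)
    policy_loss = -torch.min(ratio * advantages, clipped * advantages).mean()
    value_loss = (critic(joint_obs).squeeze(-1) - returns).pow(2).mean()
    return policy_loss, value_loss

# Dummy batch just to show the call; real data would come from combat-environment rollouts.
B = 32
obs, joint_obs = torch.randn(B, OBS_DIM), torch.randn(B, JOINT_OBS_DIM)
actions = torch.randint(0, N_ACTIONS, (B,))
old_log_probs, advantages, returns = torch.randn(B), torch.randn(B), torch.randn(B)
print(mappo_losses(obs, joint_obs, actions, old_log_probs, advantages, returns))
```

In a full training loop the advantages and returns would be computed from rollouts of the air combat environment (for example with generalized advantage estimation), and one such actor would be instantiated, or shared, across the cooperating UAVs.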

Description

Technical field

[0001] The invention belongs to the technical field of unmanned aerial vehicles, and in particular relates to a multi-aircraft air combat decision-making method.

Background technique

[0002] The purpose of decision-making for unmanned combat aircraft is to enable them to gain an advantage in combat or to turn a disadvantage into an advantage. The key research task is to design an efficient autonomous decision-making mechanism. Autonomous decision-making for unmanned combat aircraft concerns how tactical plans are formulated or flight actions are selected in real time according to the actual combat environment. The quality of this decision-making mechanism reflects the intelligence level of unmanned combat aircraft in modern air combat. The input to the autonomous decision-making mechanism consists of the various parameters related to air combat, such as the aircraft's flight parameters, weapon parameters, three-dimensional space scene parameters and the rel...
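The background passage lists the decision mechanism's inputs (flight parameters, weapon parameters, three-dimensional scene parameters), and the abstract mentions a neural network normalization model. Purely as a hedged illustration, the sketch below packs a few such quantities into a normalized observation vector of the kind a policy network would consume; the field names and value ranges are invented for the example and do not come from the patent.

```python
# Illustrative only: building a normalized observation vector from air-combat
# quantities; all ranges and field names are assumptions, not the patent's model.
import numpy as np

def normalize(value, low, high):
    """Scale a physical quantity into [-1, 1] before feeding it to the policy network."""
    return 2.0 * (value - low) / (high - low) - 1.0

def build_observation(own, target):
    """own/target: dicts with position (m), airspeed (m/s) and, for own, altitude and heading."""
    rel_pos = np.asarray(target["pos"]) - np.asarray(own["pos"])
    distance = float(np.linalg.norm(rel_pos))
    return np.array([
        normalize(own["alt"], 0.0, 12000.0),       # own altitude
        normalize(own["speed"], 100.0, 400.0),     # own airspeed
        normalize(own["heading"], -np.pi, np.pi),  # own heading angle
        normalize(distance, 0.0, 50000.0),         # range to target
        normalize(target["speed"], 100.0, 400.0),  # target airspeed
    ], dtype=np.float32)

own = {"pos": [0.0, 0.0, 5000.0], "alt": 5000.0, "speed": 250.0, "heading": 0.3}
target = {"pos": [8000.0, 2000.0, 6000.0], "speed": 230.0}
print(build_observation(own, target))
```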

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G05D1/10
CPC: G05D1/104
Inventors: 刘小雄, 尹逸, 苏玉展, 秦斌, 韦大正
Owner: NORTHWESTERN POLYTECHNICAL UNIV