134 results for "Air combat" patented technology

Reinforcement-learning-based air combat maneuver decision-making method for an unmanned aerial vehicle (UAV)

Inactive · CN108319286A · Enhance autonomous air combat capability · Avoid tedious and error-prone · Attitude control · Position/course control in three dimensions · Jet aeroplane · Fuzzy rule
The invention provides a reinforcement-learning-based air combat maneuver decision-making method for a UAV. A motion model of the aircraft platform is created, and the principal factors influencing the air combat situation are analyzed. On the basis of the motion model and the situation-factor analysis, a dynamic fuzzy Q-learning model for air combat maneuver decision making is designed, and the essential elements and algorithm flow of the reinforcement learning are determined. The state space of the maneuver decision is fuzzified to serve as the state input of the reinforcement learning; typical air combat maneuvers are selected as its basic actions, and the firing strengths of the fuzzy rules are summed in a weighted manner so that a continuous action space is covered. On the basis of an established air combat dominance function, the reward of the reinforcement learning is set by weighting and superposing reward and punishment values. The method effectively improves the UAV's autonomous maneuver decision-making capability in air combat, offers stronger robustness and autonomous search optimization, and the decisions made by the UAV keep improving through continuous simulation and learning.
Owner:NORTHWESTERN POLYTECHNICAL UNIV
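
The listing gives no implementation, but a minimal sketch of a fuzzy Q-learning maneuver decider in the spirit of this abstract might look as follows; all sizes, membership functions and command values are illustrative assumptions, not the patent's:

```python
import numpy as np

# Illustrative sketch: each fuzzy rule holds a Q-value per basic maneuver,
# and the continuous control output is the firing-strength-weighted sum of
# each rule's greedy maneuver command.
N_RULES, N_MANEUVERS = 27, 7                 # e.g. 3 fuzzy sets on 3 state variables, 7 basic maneuvers
Q = np.zeros((N_RULES, N_MANEUVERS))         # rule-level Q table
CMDS = np.linspace(-1.0, 1.0, N_MANEUVERS)   # stand-in control commands for the basic maneuvers

def firing_strengths(state):
    """Placeholder fuzzification: replace with real membership functions."""
    w = np.exp(-np.sum((np.asarray(state) - np.linspace(-1, 1, N_RULES)[:, None]) ** 2, axis=1))
    return w / w.sum()

def act(state):
    w = firing_strengths(state)
    greedy = Q.argmax(axis=1)                # each rule's currently preferred maneuver
    return float(w @ CMDS[greedy])           # weighted sum covers a continuous action range

def update(state, maneuver, reward, next_state, alpha=0.1, gamma=0.95):
    w, w2 = firing_strengths(state), firing_strengths(next_state)
    target = reward + gamma * float(w2 @ Q.max(axis=1))   # fuzzy-weighted value of s'
    Q[:, maneuver] += alpha * w * (target - float(w @ Q[:, maneuver]))

a = act([0.2, -0.1])                         # continuous command from the fuzzy blend
update([0.2, -0.1], maneuver=3, reward=0.5, next_state=[0.25, -0.05])
```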

Multi-machine collaborative air combat planning method and system based on deep reinforcement learning

Active · CN112861442A · Solve hard-to-converge problems · Make up for the shortcomings of poor exploratory performance · Design optimisation/simulation · Neural architectures · Engineering · Network model
In the multi-aircraft cooperative air combat planning method and system based on deep reinforcement learning provided by the invention, each combat aircraft is regarded as an agent, a reinforcement learning agent model is constructed, and the network model is trained with a centralized-training, distributed-execution architecture, overcoming the weak exploration that results from the low distinction between the actions of different entities in multi-aircraft cooperation. By embedding expert experience in the reward value, the method avoids the large amount of expert-experience support that the prior art requires. Through an experience-sharing mechanism, all agents share one set of network parameters and one experience replay buffer, addressing the fact that a single agent's strategy depends not only on its own strategy and the environment's feedback but is also influenced by the behaviors of, and cooperation with, the other agents. By increasing the sampling probability of samples whose advantage values have large absolute values, samples with extremely large or small rewards influence the training of the neural network more, which accelerates the algorithm's convergence; adding a policy-entropy term improves the agents' exploration.
Owner:NAT UNIV OF DEFENSE TECH
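
Two of the training tricks the abstract names have standard generic forms; a sketch of those generic versions follows (function names and coefficients are my own assumptions, not the patent's):

```python
import numpy as np

# (1) Advantage-prioritized sampling: transitions with large |advantage|
#     are drawn more often, so extreme-reward samples shape training more.
def prioritized_indices(advantages, batch_size, eps=1e-3):
    p = np.abs(advantages) + eps
    p = p / p.sum()
    return np.random.choice(len(advantages), size=batch_size, p=p)

# (2) Policy-entropy bonus: subtracting scaled entropy from the actor loss
#     rewards less deterministic policies, i.e. more exploration.
def actor_loss(log_probs, advantages, action_probs, entropy_coef=0.01):
    policy_term = -(log_probs * advantages).mean()
    entropy = -(action_probs * np.log(action_probs + 1e-8)).sum(-1).mean()
    return policy_term - entropy_coef * entropy

idx = prioritized_indices(np.array([0.1, -2.0, 0.5]), batch_size=2)
```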

Unmanned aerial vehicle autonomous air combat decision framework and method

The invention discloses an unmanned aerial vehicle autonomous air combat decision framework and method, belonging to the field of computer simulation. The framework comprises a domain-knowledge-based air combat decision module, a deep network learning module, a reinforcement learning module and an air combat simulation environment. The air combat decision module generates an air combat training data set and outputs it to the deep network learning module, which learns a deep network, a Q-value fitting function and an action selection function and passes them to the reinforcement learning module. The air combat simulation environment uses the learned air combat decision function to carry out self-play air combat and records the process data to form a reinforcement learning training set; the reinforcement learning module uses this training set to optimize and improve the Q-value fitting function, yielding an air combat strategy with better performance. The framework fits the inherently complex Q function more accurately and quickly, improves the learning effect, largely prevents the Q function from converging to a local optimum, and constructs a closed-loop air combat decision optimization process that needs no external intervention.
Owner:BEIHANG UNIV
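
Purely as a structural illustration, the closed loop the abstract describes can be skeletonized as below; every function body is a trivial stand-in for the corresponding module, not the patent's implementation:

```python
# Stand-ins for the four components; replace each with the real module.
def generate_expert_data():          # domain-knowledge air combat decision module
    return [("state", "action", 0.0)]

def fit_q_function(dataset):         # deep network learning module: fit Q, return a policy
    return lambda state: "action"

def self_play(policy):               # air combat simulation environment: record process data
    return [("state", policy("state"), 0.1)]

def refine(policy, rl_dataset):      # reinforcement learning module: improve the Q fit
    return policy

policy = fit_q_function(generate_expert_data())
for _ in range(3):                   # closed loop: simulate, record, optimize, repeat
    policy = refine(policy, self_play(policy))
```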

Particle swarm optimization method for air combat decision

Inactive · CN101908097A · Good air combat decisions · Address inherent shortcomings · Biological neural network models · Special data processing applications · Decision scheme · Empirical formula
The invention discloses a particle swarm optimization method for air combat decision making, comprising the following steps: firstly, acquiring the current battlefield situation from a command and control center; secondly, acquiring the threat factors among aircraft according to the current battlefield situation; thirdly, setting the particle swarm scale and the maximum number of iterations; fourthly, initializing all particles of the swarm; fifthly, acquiring, from an empirical formula, the degree of threat the enemy poses to the first party after the first party's weapon attacks; sixthly, constructing a BP (back-propagation) neural network; seventhly, updating the historical optimal position of the swarm and the individual historical optimal positions of the particles; eighthly, continuing to search for an air combat decision scheme until the maximum number of iterations is reached; and ninthly, taking the historical optimal position of the swarm as the resulting air combat decision. By processing the input and output of the BP neural network, the decision method can move within a set solution space and has favorable search capability for the optimal solution.
Owner:BEIHANG UNIV
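
The nine steps map onto textbook particle swarm optimization; a generic sketch follows, in which the fitness function is a toy stand-in for the BP-network-based evaluation of steps 5 and 6:

```python
import numpy as np

def pso(fitness, dim, n_particles=30, max_iters=100, w=0.7, c1=1.5, c2=1.5):
    x = np.random.uniform(-1, 1, (n_particles, dim))    # step 4: initialize particles
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([fitness(p) for p in x])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(max_iters):                          # step 8: iterate to the limit
        r1, r2 = np.random.rand(2, n_particles, dim)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([fitness(p) for p in x])
        better = vals > pbest_val                       # step 7: update best positions
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest                                        # step 9: swarm's best = the decision

best = pso(lambda p: -np.sum((p - 0.3) ** 2), dim=4)    # toy objective in place of the real one
```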

Close-range air combat automatic decision-making method based on single-step-prediction matrix gaming

The invention discloses a close-range air combat automatic decision-making method based on single-step-prediction matrix gaming. The method comprises the following steps: step 1, establishing a six-degree-of-freedom nonlinear unmanned combat air vehicle control-law structure; step 2, initializing the matrix-game chessboard; step 3, carrying out the single-step prediction calculation according to the game chessboard; step 4, calculating the payoff function matrix; step 5, selecting a strategy with a minimax algorithm; step 6, updating the six-degree-of-freedom aircraft kinematic and dynamic equations; and step 7, determining whether an air combat termination condition has been reached. Compared with a three-degree-of-freedom particle model, the method has greater practical application value. At the same time, the conventional matrix-game method based on a maneuver library is changed to one based on a command-model maneuver library; only a single prediction step is needed, which effectively reduces the decision-making time and meets the real-time requirement of air combat verification. The method adapts well to complex, dynamic battlefield environment changes and improves the combat capability of an unmanned combat air vehicle in close-range combat.
Owner:BEIHANG UNIV
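
Steps 3 to 5 amount to building a payoff matrix by one-step prediction and picking the maneuver with the best worst case. A sketch with placeholder prediction and payoff functions follows; both placeholders are assumptions, as the patent's 6-DOF model is not reproduced here:

```python
import numpy as np

MANEUVERS = ["level", "climb", "dive", "left_turn", "right_turn"]

def predict(state, ours, theirs):
    return state                                # placeholder one-step 6-DOF prediction

def payoff(state):
    return 0.0                                  # placeholder payment (advantage) function

def minimax_maneuver(state):
    M = np.array([[payoff(predict(state, a, b)) for b in MANEUVERS]
                  for a in MANEUVERS])          # step 4: payment function matrix
    worst_case = M.min(axis=1)                  # assume the enemy answers with our worst case
    return MANEUVERS[int(worst_case.argmax())]  # step 5: minimax strategy selection

choice = minimax_maneuver(state={"range_m": 1500.0, "aspect_deg": 30.0})
```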

Multi-unmanned aerial vehicle cooperative air combat maneuver decision-making method based on multi-agent reinforcement learning

The invention discloses a multi-UAV cooperative air combat maneuver decision-making method based on multi-agent reinforcement learning, which solves the problem of autonomous maneuver decision making in simulated many-to-many cooperative air combat. The method comprises the following steps: creating a motion model of the UAV platform; analyzing the state space, action space and reward of the multi-aircraft maneuver decision on the basis of a multi-aircraft situation assessment built from attack-zone, distance and angle factors; and designing a target allocation method and a strategy coordination mechanism for cooperative air combat, in which the distribution of reward values defines each UAV's behavioral feedback for target allocation, situational advantage and safe collision avoidance, so that strategy coordination is achieved after training. The method effectively improves the ability of multiple UAVs to make autonomous cooperative maneuver decisions, achieves stronger cooperation and autonomous optimization, and continuously raises the decision-making level of the UAV formation through repeated simulation and learning.
Owner:NORTHWESTERN POLYTECHNICAL UNIV
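
The reward decomposition described (target allocation, situational advantage, safe collision avoidance) could be sketched as a weighted sum; all weights and term definitions below are illustrative assumptions:

```python
# Per-UAV shaped reward mixing the three feedback channels the abstract names.
def uav_reward(target_gain, situation_advantage, min_separation,
               safe_distance=100.0, w1=0.5, w2=0.4, w3=0.1):
    collision_term = -1.0 if min_separation < safe_distance else 0.0  # safe-avoidance feedback
    return w1 * target_gain + w2 * situation_advantage + w3 * collision_term

r = uav_reward(target_gain=0.6, situation_advantage=0.3, min_separation=80.0)
```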

Unmanned aerial vehicle cooperative air combat decision-making method based on genetic fuzzy tree

The invention discloses an unmanned aerial vehicle cooperative air combat decision-making method based on a genetic fuzzy tree, comprising the following steps: building a UAV cooperative air combat comprehensive advantage evaluation index system, which comprises a UAV air combat capability evaluation model and a UAV air combat situation evaluation model; establishing a target allocation evaluation function, searching for the optimal target allocation with a genetic algorithm, and thereby constructing a genetic-algorithm-based UAV cooperative air combat target allocation model; constructing a UAV air combat motion model and refining and expanding the UAV's basic maneuver library; and constructing a UAV cooperative air combat decision-making model based on the genetic fuzzy tree, identifying the fuzzy tree's parameters from sample data and its structure with a genetic algorithm, to obtain a decision-making model that meets the precision requirement with low complexity. The method obtains the optimal target allocation in cooperative air combat of a UAV group and realizes optimal maneuvering of the UAV in one-on-one air combat.
Owner:NANJING UNIV OF AERONAUTICS & ASTRONAUTICS
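
The target allocation step is a standard genetic-algorithm search; a toy sketch follows, in which a chromosome assigns each friendly UAV one enemy target and fitness() stands in for the patent's evaluation function built from the capability and situation models:

```python
import random

def ga_allocate(n_uavs, n_targets, fitness, pop=40, gens=100, pm=0.1):
    popn = [[random.randrange(n_targets) for _ in range(n_uavs)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)            # rank by allocation quality
        parents = popn[:pop // 2]                       # truncation selection
        children = []
        while len(parents) + len(children) < pop:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_uavs)           # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < pm:                    # mutation: reassign one UAV
                child[random.randrange(n_uavs)] = random.randrange(n_targets)
            children.append(child)
        popn = parents + children
    return max(popn, key=fitness)

best = ga_allocate(4, 3, fitness=lambda c: len(set(c)))  # dummy fitness: spread over targets
```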

Air-combat tactical team simulation method based on an expert system and tactic/strategy fractalization

The invention discloses an air-combat tactical team simulation method based on an expert system and tactic/strategy fractalization. The method includes the following steps: simulation modeling is conducted with mathematical methods for knowledge construction, representation and processing, and tactics and strategies are converted into data sets that are stored and called as libraries or sub-libraries; a fractal strategy library is constructed; a tactical index system for evaluating the combat effectiveness of a tactical team is built; an air-combat knowledge self-learning mechanism is built; a strategy inference mechanism is built with artificial intelligence techniques; and strategy reconstruction is conducted, achieving smooth transitions between tactics and between strategies. The method has been tried and verified in research projects: with a blue-side intelligent virtual tactical team built this way, a full-process two-sided air combat simulation confrontation test can be run while red-side trainees only operate and command on the red side's flight simulation platform, which reduces the hardware scale and allows the training difficulty to be raised.
Owner:黄安祥 +4

Large-scale air-to-air combat effectiveness evaluation method based on improved Lanchester equation

The invention discloses a large-scale air-to-air combat effectiveness evaluation method based on an improved Lanchester equation. The method comprises the following steps: first, determining the initial numbers of friendly and enemy combat aircraft and selecting a suitable virtual time step; calculating the average air combat capability index of the friendly aircraft and that of the enemy aircraft; then calculating the average hit probability; calculating the friendly side's combat achievement and combat damage; and finally evaluating the air-to-air combat effectiveness level against the combat requirements from that achievement and damage. Combat achievement and damage at different virtual times can be computed, and the effectiveness of large-scale air-to-air combat evaluated. The method can be used in the pre-war planning and preparation stage to evaluate the task effectiveness of air-to-air combat and inform the allocation of forces, and it can also evaluate the combat implementation stage in real time to support commanders' decisions.
Owner:THE 28TH RES INST OF CHINA ELECTRONICS TECH GROUP CORP
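
A Lanchester-style attrition model of the kind the abstract builds on can be sketched as a time-stepped simulation; the coefficients below are illustrative, and the patent's specific improvements are not reproduced:

```python
import numpy as np

# Time-stepped attrition: average capability indices and hit probability set
# effective kill rates, and strengths are updated each virtual time step.
def simulate(blue0, red0, blue_capability, red_capability,
             hit_prob=0.5, dt=0.1, t_end=10.0):
    blue, red = float(blue0), float(red0)
    for _ in np.arange(0.0, t_end, dt):
        blue_losses = hit_prob * red_capability * red * dt    # enemy fire on our side
        red_losses = hit_prob * blue_capability * blue * dt   # our fire on the enemy
        blue, red = max(blue - blue_losses, 0.0), max(red - red_losses, 0.0)
        if blue == 0.0 or red == 0.0:
            break
    return blue, red   # surviving strengths -> combat damage and combat achievement

blue_left, red_left = simulate(100, 80, blue_capability=0.04, red_capability=0.03)
```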

Unmanned aerial vehicle maneuvering decision-making method combining hesitant fuzzy theory and dynamic deep reinforcement learning

The invention discloses a UAV maneuvering decision-making method combining hesitant fuzzy theory and dynamic deep reinforcement learning. The method comprises the steps of: first, establishing a UAV air combat motion model, and establishing a decision-making model with a weighted optimization objective built from the attack parameters of both sides and the situation-based energy-parameter difference; second, determining the optimal weights of the decision model's objectives in real time with the maximum deviation method, according to hesitant fuzzy theory; then constructing the state space and action space for the air combat maneuver-decision reinforcement learning; then combining the UAV states at multiple moments into a state set as the neural network input and constructing a dynamic deep Q network for maneuver-decision training; and finally obtaining the optimal maneuvering decision from the trained dynamic deep Q network. The method mainly addresses UAV maneuvering decision making under incomplete environmental information, accounts for the influence of the air combat process in the decision making, and better meets the requirements of actual air combat.
Owner:NANJING UNIV OF AERONAUTICS & ASTRONAUTICS
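
The maximum deviation method for weighting the optimization objectives has a standard form: objectives on which the candidate actions differ more receive larger weights. A sketch with crisp scores follows (the patent uses hesitant fuzzy evaluations, which this simplifies away):

```python
import numpy as np

# scores[i, j]: evaluation of candidate maneuver i on objective j.
def max_deviation_weights(scores):
    # total pairwise deviation per objective; normalize to get weights
    dev = np.abs(scores[:, None, :] - scores[None, :, :]).sum(axis=(0, 1))
    return dev / dev.sum()

scores = np.array([[0.8, 0.4, 0.6],
                   [0.7, 0.9, 0.5],
                   [0.6, 0.2, 0.5]])
weights = max_deviation_weights(scores)   # objectives that discriminate more weigh more
overall = scores @ weights                # weighted score per candidate maneuver
```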

Aviation force system intelligent behavior modeling method based on global situation information

The invention discloses a global-situation-information-based intelligent behavior modeling method for an aviation force system. The method comprises: expressing the global situation mathematically as a state vector through comprehensive assessment of the global battlefield situation in complex airspace; proposing a situation feature extraction and perception algorithm based on a two-dimensional GIS situation map, obtaining element information that cannot be read directly from the state vector, and producing a global situation state space for the intelligent behavior model to perceive; and proposing a reward-generation algorithm based on maximizing the network's connected domain, which drives the intelligent behavior model to iteratively evolve toward high returns under incomplete global-situation stimulus. The scheme provides a theoretical basis and technical support for gaining greater combat-information advantage under incomplete battlefield situation awareness, generating efficient air combat command and control decisions, analyzing, war-gaming and replaying air combat schemes, and raising the combat level of the aviation force system.
Owner:BEIHANG UNIV
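
A reward built on "network connected domain maximization" plausibly scores the size of the largest connected component of the force's communication or engagement graph, so behaviors that keep the force networked score higher; this reading, and the graph encoding, are my assumptions:

```python
from collections import deque

# Largest connected component of an undirected graph given as adjacency lists.
def largest_component(adjacency):
    seen, best = set(), 0
    for start in adjacency:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:                     # breadth-first sweep of one component
            node = queue.popleft()
            size += 1
            for nxt in adjacency[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        best = max(best, size)
    return best

reward = largest_component({0: [1], 1: [0, 2], 2: [1], 3: []})   # -> 3
```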

Air combat maneuvering strategy generation technique based on a deep stochastic game

The invention discloses a close-range air combat maneuvering strategy generation technique based on a deep stochastic game. The technique comprises the following steps: first, constructing a training environment for fighter-aircraft game confrontation according to a 1v1 close-range air combat process and setting the enemy maneuvering strategy; second, taking the stochastic game as the framework, constructing the agents of both sides of the air combat confrontation and determining each agent's state space, action space and reward function; third, constructing a neural network with a minimax DQN algorithm that combines the stochastic game with deep reinforcement learning, and training our agent; and finally, from the trained network, obtaining the optimal maneuvering strategy for an air combat situation by linear programming and conducting game confrontation with the enemy. Combining the ideas of stochastic games and deep reinforcement learning, the proposed minimax DQN algorithm yields an optimal air combat maneuvering strategy that can be applied in an existing air combat maneuver guidance system, making effective decisions accurately and in real time to guide a fighter into a favorable situational position.
Owner:CHENGDU RONGAO TECH
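
The final step, extracting an optimal mixed strategy from a payoff matrix by linear programming, is a standard zero-sum-game computation; a sketch using scipy follows (the payoff matrix here is a toy, not the trained network's output):

```python
import numpy as np
from scipy.optimize import linprog

# Given our payoff matrix A (rows: our maneuvers, columns: enemy maneuvers),
# find the mixed strategy x maximizing the guaranteed game value v.
def solve_matrix_game(A):
    m, n = A.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                          # minimize -v, i.e. maximize v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])             # v <= x^T A[:, j] for every enemy reply j
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # strategy probabilities sum to 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]              # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]                            # mixed strategy, game value

strategy, value = solve_matrix_game(np.array([[1.0, -1.0], [-0.5, 0.5]]))
```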

Unmanned aerial vehicle concealed approach method employing a Double DQN with a prioritized random sampling strategy

Inactive · CN110673488A · In line with the actual battlefield environment · Overcome the disadvantages of fitting · Adaptive control · Simulation · Uncrewed vehicle
The invention discloses a UAV concealed approach method employing a Double DQN with a prioritized random sampling strategy. The method comprises the steps of: first, establishing a two-sided air combat situation diagram for the concealed approach and, from it, establishing the dominant area and exposed area of the approach process; second, establishing the UAV's state space, converting it into a feature space, and defining a speed-limited UAV action space; third, building a double deep Q-learning network with a prioritized random sampling strategy; fourth, establishing a target potential-function reward from the relative positions of the two sides in the dominant and exposed areas, establishing an obstacle reward from the distance between the UAV and an obstacle, and superposing the two as the total reward for the Double DQN network's concealed-approach training; and finally inputting the UAV's current feature sequence into the trained Q target network of the Double DQN to obtain the UAV's optimal concealed approach strategy. The method mainly solves the model-free concealed approach problem of the UAV.
Owner:NANJING UNIV OF AERONAUTICS & ASTRONAUTICS
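
The Double DQN target computation at the core of the method is standard: the online network selects the next action and the target network evaluates it, which damps the Q-value overestimation that plain DQN suffers from. A sketch with dummy Q-value vectors (the network outputs themselves are not reproduced here):

```python
import numpy as np

# q_online_next / q_target_next: the two networks' Q values for the next
# feature sequence; reward here would be the potential-function + obstacle sum.
def double_dqn_target(reward, done, q_online_next, q_target_next, gamma=0.99):
    a_star = int(np.argmax(q_online_next))     # action selection: online network
    bootstrap = q_target_next[a_star]          # action evaluation: target network
    return reward if done else reward + gamma * bootstrap

y = double_dqn_target(reward=0.1, done=False,
                      q_online_next=np.array([0.2, 0.5, 0.1]),
                      q_target_next=np.array([0.3, 0.4, 0.2]))
```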

Air combat target threat assessment method based on a standardized fully-connected residual network

The invention discloses an air combat target threat assessment method based on a standardized fully-connected residual network, belonging to the field of battlefield situation assessment. First, a simulation experiment is carried out to label data, and the training and test sets are constructed and stored in CSV files; second, the standardized fully-connected residual network is built under TensorFlow, including constructing the graph that reads the CSV data, the residual network layers and the standardized fully-connected residual network graph; finally, a TensorFlow session is created, and the network model is trained, tested, analyzed for performance and verified. The method addresses the inaccurate assessments that other air combat target threat assessment methods produce for large data samples because they lack self-learning reasoning capability: it can learn the distribution of the input data, mine the rules hidden in it, and, once trained, accurately assess air combat target threats. The method is mainly (but not exclusively) intended for battlefield situation assessment.
Owner:ZHONGBEI UNIV
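
A "standardized fully-connected residual" block plausibly means dense layers with batch normalization and a skip connection; a Keras sketch under that assumption follows (layer sizes and the sigmoid threat-score head are illustrative, and the patent's TF1 graph/session structure is not reproduced):

```python
import tensorflow as tf

# Two dense layers with batch normalization ("standardization") and a skip
# connection; input width must match `units` for the residual add.
def residual_block(x, units=64):
    h = tf.keras.layers.Dense(units)(x)
    h = tf.keras.layers.BatchNormalization()(h)
    h = tf.keras.layers.ReLU()(h)
    h = tf.keras.layers.Dense(units)(h)
    h = tf.keras.layers.BatchNormalization()(h)
    return tf.keras.layers.ReLU()(tf.keras.layers.Add()([x, h]))

inputs = tf.keras.Input(shape=(64,))        # e.g. encoded target-state features
x = residual_block(residual_block(inputs))
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)   # threat score in [0, 1]
model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
```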