Multi-agent reinforcement learning method and system based on value decomposition

A multi-agent reinforcement learning technology, applied in neural learning methods, biological models, indoor games, etc. It addresses problems such as the absence of positive reward feedback, convergence times that are difficult to guarantee, and underestimation of rewards, achieving stable reward values, accelerated exploration efficiency, and improved algorithm performance.

Pending Publication Date: 2022-06-24
HOHAI UNIV
Cites: 2 | Cited by: 2

AI Technical Summary

Problems solved by technology

[0007] In a large-scale collaborative environment, a problem arises during value-function decomposition: the size of the value function's joint state-action space grows exponentially with the number of agents, which makes fast and efficient value decomposition more difficult, and convergence time is often not guaranteed.
[0008] (1) Because of the complexity of the multi-agent environment, an agent must spend a great deal of time in the early stage of exploration searching for states beneficial to itself or to the system, and the exploration space grows with the number of agents. In multi-agent scenes with sparse rewards, positive reward feedback may not be obtained for a long time, the agent cannot effectively perceive scene information and make correct decisions, and the convergence time is difficult to guarantee.
[0009] (2) When an agent acts according to its policy, if the misestimation of several suboptimal joint actions exceeds the better estimate of the single optimal joint action, the reward corresponding to the optimal action is underestimated. The agent then chooses actions of suboptimal value, its action evaluation falls into a locally optimal cycle, it cannot settle on the best action, and the time it takes to make action decisions is prolonged.
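The misestimation effect described in [0009] can be reproduced in a small, self-contained simulation (all numbers here are illustrative assumptions, not values from the patent): when estimation noise on several suboptimal joint actions exceeds the small value gap to the optimal action, a greedy argmax frequently selects a suboptimal action.

```python
import random

random.seed(0)

# Illustrative true values for four joint actions; action 0 is optimal
# by a small margin of 0.1.
true_q = [1.0, 0.9, 0.9, 0.9]

def noisy_estimate(values, noise=0.3):
    """Simulate misestimation: uniform noise added to each learned value."""
    return [v + random.uniform(-noise, noise) for v in values]

trials, misses = 10_000, 0
for _ in range(trials):
    est = noisy_estimate(true_q)
    greedy = max(range(len(est)), key=est.__getitem__)
    if greedy != 0:  # a suboptimal joint action won the argmax
        misses += 1

print(f"greedy choice misses the optimal action in {misses / trials:.0%} of trials")
```

Because three suboptimal actions each get an independent chance to be over-estimated past the 0.1 gap, the greedy choice is wrong in a substantial fraction of trials, which is exactly the underestimation-of-the-optimal-action failure mode the patent targets.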




Embodiment Construction

[0042] The present invention will be further described in detail below with reference to the accompanying drawings and specific implementations. It should be noted that the following embodiments are intended to facilitate understanding of the ideas of the present invention and do not limit it in any way.

[0043] The present invention proposes a multi-agent reinforcement learning method based on value decomposition that can be applied in complex, partially observable scenes such as video games, sensor networks, robot swarm coordination, and autonomous vehicles. Each agent plays a different role in each multi-agent scenario: a hero in a game scenario, a sensor in a sensor network, a single robot in a robot-collaboration scenario, a car in an autonomous driving scenario. Any object that can perceive the environment and independently make decisions that affect the environment can be abstracted as a...



Abstract

The invention discloses a multi-agent reinforcement learning method and system based on value decomposition. The method comprises: obtaining the state St of the environment at the current time t, the action available from each agent's initial observation, and the reward r corresponding to that action; for each agent, calculating the value function Qi(taui) of each observed action from the local information taui through an evaluation-agent network; using a random-agent network to obtain each agent's reward value function Qi(tau) based on the global information tau; using the target-agent network to calculate the loss function and update parameters; decomposing each agent's reward value function Qi(tau) based on the global information tau with a competition-agent network; and summing the decomposition results to obtain the joint reward value function Qtot(tau, a) based on the global information tau, completing the training. The method performs double extraction of the logical topological relations among the agents, improving the agents' learning efficiency and adaptability in complex, heterogeneous, partially observable scenes.
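The abstract names several cooperating networks (evaluation-, random-, target-, and competition-agent). As a minimal, tabular sketch of only the core idea — an additive decomposition Qtot(tau, a) = sum over i of Qi(taui, ai), in the style of value-decomposition networks — the following assumes tabular Q-values, a factorized greedy step, and a simple TD update; all function names, sizes, and hyperparameters are illustrative assumptions, not the patent's implementation.

```python
# Tabular sketch of additive value decomposition: each agent i keeps its
# own utility Q_i over (local observation, action), and the joint value
# is the sum of the chosen per-agent utilities.
from collections import defaultdict

n_agents, n_actions = 3, 4
q_i = [defaultdict(float) for _ in range(n_agents)]  # Q_i[(tau_i, a_i)]

def q_tot(local_obs, joint_action):
    """Q_tot(tau, a) = sum_i Q_i(tau_i, a_i)."""
    return sum(q_i[i][(local_obs[i], joint_action[i])]
               for i in range(n_agents))

def td_update(local_obs, joint_action, reward, next_obs,
              alpha=0.1, gamma=0.99):
    """One centralized TD step on the summed value; the gradient of a sum
    hands an equal share of the TD error to every agent's table."""
    # The greedy next joint action factorizes: each agent independently
    # maximizes its own Q_i, so no exponential joint search is needed.
    best_next = tuple(
        max(range(n_actions), key=lambda a: q_i[i][(next_obs[i], a)])
        for i in range(n_agents))
    td_error = (reward + gamma * q_tot(next_obs, best_next)
                - q_tot(local_obs, joint_action))
    for i in range(n_agents):
        q_i[i][(local_obs[i], joint_action[i])] += alpha * td_error

# Usage: a single transition; all tables start at zero.
obs, act, nxt = ("s0", "s0", "s0"), (1, 2, 0), ("s1", "s1", "s1")
td_update(obs, act, reward=1.0, next_obs=nxt)
print(q_tot(obs, act))  # three agents each absorbed alpha * td_error = 0.1
```

The factorized argmax in `td_update` is what keeps learning tractable as the joint action space grows exponentially in the number of agents (the problem raised in [0007]): each agent searches only its own `n_actions` options.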

Description

technical field [0001] The invention relates to multi-party cooperative processing technology, and in particular to a multi-agent reinforcement learning method and system based on value decomposition. Background technique [0002] Multi-agent reinforcement learning (MARL) is currently a hot topic, with broad prospects for solving many complex real-world problems such as sensor networks, robot swarm coordination, and autonomous vehicles. In practical applications, however, MARL faces two main challenges: partial observability and stability. First, when interacting with the environment, an agent cannot observe and make decisions from a global perspective and therefore cannot learn the globally optimal strategy; it can only observe information within its own field of view. Second, in a multi-agent environment, agents interact with one another, and each agent acts on its own local observations, which may affect other agents, beca...

Claims


Application Information

IPC(8): G06N3/00; G06N3/04; G06N3/08; A63F13/57; A63F13/77; A63F13/847
CPC: G06N3/008; G06N3/08; A63F13/57; A63F13/77; A63F13/847; G06N3/045
Inventors: 谢在鹏, 邵鹏飞, 高原, 张雨锋
Owner HOHAI UNIV