
Hierarchical reinforcement learning task graph evolution method based on cause-and-effect diagram

A technique combining reinforcement learning and task graphs, applied to genetic models, instruments, electrical digital data processing, etc. It addresses problems such as becoming trapped in local optima, and achieves the effects of faster search, faster learning, and improved adaptability.

Inactive Publication Date: 2012-06-27
SOUTHEAST UNIV

AI Technical Summary

Problems solved by technology

[0003] At present, the main automatic decomposition approach for MAXQ is the HI-MAT method. However, the task graph HI-MAT produces depends on a single observed successful trajectory: it searches the space of task-graph structures for a graph consistent with that trajectory, and therefore easily becomes stuck in a local optimum.



Detailed Description of the Embodiments

[0050] The present invention will be described in detail below in conjunction with the accompanying drawings.

[0051] HI-MAT applies DBN analysis to a single successful trajectory of an existing reinforcement learning task to construct a MAXQ task hierarchy, and then reuses that hierarchy on the target task. Because the result is a task graph consistent with that one trajectory, HI-MAT easily falls into a local optimum. The present invention proposes a task-graph evolution method based on causal graphs to construct a task graph better suited to the target environment. The method steers the search through the space of task-graph hierarchies according to the causal graph of the target environment: while the genetic operators run, the causal dependencies among the state variables relevant to the adjusted nodes are kept consistent with the causal graph. This improves the fitness of the task graphs and thereby speeds up the search.
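The causal-consistency constraint described above can be sketched in code. This is a minimal illustration, not the patent's implementation: the edge-set representation of the causal graph, the `node_vars` mapping from subtasks to their relevant state variables, and the acceptance rule (a child subtask must control a variable that causally influences a variable of its parent) are all assumptions made for the sketch.

```python
from itertools import product

def transitive_closure(edges):
    """All (cause, effect) pairs reachable in the causal graph."""
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(list(closure), repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))  # chain a -> b and b -> d into a -> d
                changed = True
    return closure

def respects_causal_order(task_edges, node_vars, causal_edges):
    """Accept a (possibly mutated) task graph only if every parent->child
    edge is backed by a causal dependency: some variable the child
    controls must causally influence a variable of its parent."""
    closure = transitive_closure(causal_edges)
    for parent, child in task_edges:
        supported = any((cv, pv) in closure
                        for cv in node_vars[child]
                        for pv in node_vars[parent])
        if not supported:
            return False
    return True
```

A genetic operator would call a check like this after each crossover or mutation and reject (or repair) offspring that break the causal ordering, which is how the search stays inside the causally consistent part of the task-graph space.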



Abstract

The invention provides a hierarchical reinforcement learning task graph evolution method based on a cause-and-effect (causal) diagram. The method comprises the following steps: (1) setting parameters; (2) exploring the causal diagram of the target environment; (3) initializing a population of size N; (4) computing fitness values; (5) performing genetic operations, namely selection, crossover, and mutation, while maintaining the causal relationships between nodes; (6) testing whether to stop; (7) saving the K task graphs G1, G2,..., Gk with the highest fitness for the causal diagram; and (8) outputting the task graph G1 with the highest fitness. Compared with the prior art, the automation and efficiency of the task-graph construction make the method suitable for large-scale complex systems and applicable when the system environment changes dynamically. The method depends only on changes in the causal diagram of the target environment: when that diagram changes regularly, it predicts the corresponding changes in the task hierarchy and rapidly and efficiently generates a MAXQ task graph.
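Steps (1)-(8) above form a standard generational loop. The sketch below shows that skeleton only; it assumes the caller supplies the problem-specific pieces (initialization from the causal diagram, the fitness function, and causality-preserving selection, crossover, and mutation operators), none of which are specified by the abstract, so every helper callable here is a placeholder.

```python
import random

def evolve_task_graphs(causal_diagram, pop_size, k_best, max_gens,
                       init_graph, fitness, select, crossover, mutate):
    """Generational loop over task graphs, following steps (3)-(8).
    `mutate` receives the causal diagram so it can keep the causal
    relationships between nodes intact (step 5)."""
    # Step (3): initialize a population of size N from the causal diagram.
    population = [init_graph(causal_diagram) for _ in range(pop_size)]
    for _ in range(max_gens):                       # step (6): stop after max_gens
        # Step (4): rank the population by fitness.
        scored = sorted(population, key=fitness, reverse=True)
        # Step (5): selection, then crossover and causality-preserving mutation.
        parents = select(scored)
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            children.append(mutate(crossover(a, b), causal_diagram))
        population = children
    # Step (7): keep the K fittest graphs; step (8): best[0] is G1.
    best = sorted(population, key=fitness, reverse=True)[:k_best]
    return best
```

Using a fixed generation count as the stopping test in step (6) is one simple choice; a fitness-plateau criterion would slot into the same loop.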

Description

Technical field

[0001] The invention relates to a computer-implemented method for optimizing the task graph of hierarchical reinforcement learning.

Technical background

[0002] Hierarchical reinforcement learning is an important approach to the curse-of-dimensionality problem in reinforcement learning. Three typical methods are Option, proposed by Sutton; HAM, proposed by Parr; and MAXQ, proposed by Dietterich. An important issue in hierarchical reinforcement learning is that the hierarchical task graph must be supplied in advance by a designer using expert knowledge. Because manually constructing the hierarchy requires such expertise and cannot cope with dynamic, unknown environments, automatically discovering and constructing the task hierarchy has become a major problem in hierarchical reinforcement learning. At present, many...

Claims


Application Information

IPC(8): G06F15/18; G06N3/12
Inventors: 王红兵 (Wang Hongbing), 周建才 (Zhou Jiancai)
Owner: SOUTHEAST UNIV