Unified curiosity-driven reinforcement learning method

A curiosity-driven reinforcement learning technology, applied in the field of reinforcement learning, that addresses the problems of reduced learning efficiency, inaccurate results, and internal rewards that fail to fully and effectively guide the agent's exploration and learning.

Pending Publication Date: 2020-11-13
ZHEJIANG UNIV

AI Technical Summary

Problems solved by technology

However, this method does not fundamentally solve three problems: 1. Different curiosity-driven methods focus on different objects, so the estimated internal rewards cannot fully and effectively guide the agent to explore and learn; 2. Because the state space is very large and contains a great deal of background information irrelevant to the learning...

Method used

An attention module first extracts a reliable feature representation of the state; state novelty estimation and forward dynamics prediction then give preliminary internal rewards for states and state-action pairs; these are smoothed over multiple samples in the state space and fused into a single, more accurate and robust internal reward; finally, the agent learns its policy from the experience data generated by interacting with the environment together with the estimated internal reward.


Examples


Embodiment

[0083] The implementation of this embodiment follows the method described above, so the specific steps are not repeated here. The following only presents the results on the case data.

[0084] First, the attention module is used to obtain a reliable feature representation of the state. Then, state novelty estimation and forward dynamics prediction are used to estimate the degree of exploration of states and state-action pairs, i.e., the preliminary estimates of the internal reward. On this basis, the estimated internal rewards are smoothed using multiple samples in the state space, and the different types of internal rewards are fused to obtain a more accurate and robust internal reward. Finally, the agent learns the policy using the experience data generated by interacting with the environment together with the estimated internal rewards. The results are shown in Figures 1, 2, and 3.
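To make the estimation step concrete, below is a minimal PyTorch sketch of how the preliminary internal rewards could be computed from state novelty and forward-dynamics prediction error. The class names (StateEncoder, ForwardModel), the network sizes, and the k-nearest-neighbour novelty measure are illustrative assumptions; the patent text here does not publish the exact architecture.

```python
import torch
import torch.nn as nn


class StateEncoder(nn.Module):
    """Stand-in for the attention-based feature extractor of step 1."""

    def __init__(self, obs_dim, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim)
        )

    def forward(self, obs):
        return self.net(obs)


class ForwardModel(nn.Module):
    """Predicts the next state feature from the current feature and action."""

    def __init__(self, feat_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + act_dim, 128), nn.ReLU(), nn.Linear(128, feat_dim)
        )

    def forward(self, feat, act):
        # act is a float tensor, e.g. a one-hot vector for discrete actions
        return self.net(torch.cat([feat, act], dim=-1))


def dynamics_reward(encoder, fwd_model, obs, act, next_obs):
    """Preliminary internal reward for a state-action pair: the forward
    model's prediction error in feature space (larger = less explored)."""
    with torch.no_grad():
        feat, next_feat = encoder(obs), encoder(next_obs)
        return ((fwd_model(feat, act) - next_feat) ** 2).mean(dim=-1)


def novelty_reward(feat, visited_feats, k=10):
    """Preliminary internal reward for a state: mean distance from its
    feature to the k nearest features of previously visited states."""
    dists = torch.cdist(feat, visited_feats)      # (batch, n_visited)
    return dists.topk(k, largest=False).values.mean(dim=-1)
```

Under this reading, a larger prediction error or a larger distance to previously visited states marks a state (or state-action pair) as less explored and therefore earns a larger preliminary internal reward.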

[0085] Figure 1 visualizes the features extracted by the attention module of the present invention on...



Abstract

The invention discloses a unified curiosity-driven reinforcement learning method, which enables an intelligent agent to quickly and effectively learn a strategy under the condition of sparse rewards. The method specifically comprises the following steps: 1) obtaining a reliable feature representation of the state through an attention module; 2) estimating the degree of exploration of states and state-action pairs by using state novelty estimation and forward dynamics prediction, i.e., the preliminarily estimated internal rewards; 3) smoothing the estimated internal rewards by using a plurality of samples in the state space; 4) fusing the different types of internal rewards to obtain more accurate and robust internal rewards; and 5) the intelligent agent learning the strategy by using experience data generated by interaction with the environment and the estimated internal rewards. The method is suitable for the sparse-reward problem in the field of reinforcement learning, and the strategy can be learned quickly and effectively when the external rewards are sparse or absent.
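Steps 3 and 4 can likewise be sketched in a few lines. This is a hypothetical rendering, assuming nearest-neighbour averaging for the smoothing and a standardised weighted sum for the fusion; the abstract above does not commit to these exact rules.

```python
import torch


def smooth_rewards(feats, raw_rewards, k=5):
    """Step 3: replace each state's internal reward with the average over
    its k nearest neighbours (itself included) in feature space."""
    dists = torch.cdist(feats, feats)             # pairwise distances, (n, n)
    idx = dists.topk(k, largest=False).indices    # (n, k)
    return raw_rewards[idx].mean(dim=-1)


def _standardise(r, eps=1e-8):
    """Zero-mean, unit-variance scaling so reward types are comparable."""
    return (r - r.mean()) / (r.std() + eps)


def fuse_rewards(r_novelty, r_dynamics, w=0.5):
    """Step 4: blend the two internal-reward types with an assumed
    weighting coefficient w (the patent does not fix its value here)."""
    return w * _standardise(r_novelty) + (1.0 - w) * _standardise(r_dynamics)
```

Smoothing over neighbouring samples suppresses noise in any single estimate, and standardising before fusing keeps one reward type from dominating the other.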

Description

technical field

[0001] The invention belongs to the field of reinforcement learning, a branch of the field of machine learning, and in particular relates to a unified curiosity-driven reinforcement learning method.

Background technique

[0002] The reward function is an important factor in the reinforcement learning process: the agent learns a policy by maximizing the cumulative reward. However, in many scenarios rewards are sparse. For example, in a game of Go the win/loss result is only received at the end, and the many actions in the middle receive no timely reward, which poses a great challenge to reinforcement learning. The traditional approach is to manually design a reward function for the specific task, but this requires deep domain expertise and cumbersome tuning, and such functions are difficult to transfer between tasks.

[0003] Existing curiosity-driven methods mainly estimate th...
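For reference, the objective described in [0002] can be written in standard reinforcement-learning notation. The formulas below are supplied for clarity and do not appear in the patent text; the discount factor \gamma and the intrinsic-reward weight \beta are conventional notation, not the patent's symbols.

% Standard discounted-return objective (conventional notation, not from the patent):
J(\pi) = \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^{t} r_{t} \right],
\qquad
r_{t} = r_{t}^{\mathrm{ext}} + \beta \, r_{t}^{\mathrm{int}}

Here r_t^ext is the (possibly sparse or absent) environment reward and r_t^int is the curiosity-based internal reward estimated by the method, so maximizing J(\pi) lets the fused internal reward guide exploration when external rewards are missing.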


Application Information

IPC(8): G06N20/00, G06K9/62
CPC: G06N20/00, G06F18/25
Inventors: 李玺 (Li Xi), 皇福献 (Huang Fuxian), 崔家宝 (Cui Jiabao), 李伟超 (Li Weichao)
Owner: ZHEJIANG UNIV