
Continuous reinforcement learning system and method based on stochastic differential equation

A technology combining stochastic differential equations with reinforcement learning, applied in the field of reinforcement learning for continuous systems, to solve problems such as failure to satisfy continuity conditions and uncontrollable variance.

Active Publication Date: 2019-11-26
SHANGHAI UNIV


Problems solved by technology

[0003] However, most current continuous reinforcement learning methods have theoretical shortcomings. For example, although the exploration noise introduced by DDPG guarantees continuity of the action, it cannot control the variance; conversely, A3C under a Gaussian policy can control the variance but does not satisfy the continuity condition in theory.
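The trade-off described above (action continuity versus variance control) is exactly what a stochastic differential equation resolves: an SDE-driven action path has increments of order sqrt(dt), so it is continuous in the small-step limit, while its stationary variance is set explicitly by the coefficients. Below is a minimal Euler-Maruyama sketch using an Ornstein-Uhlenbeck process; the drift and volatility constants are illustrative assumptions, not parameters from the patent.

```python
import math
import random

def ou_path(a0=0.0, mu=0.0, theta=0.15, sigma=0.2, dt=0.01, steps=1000, seed=0):
    """Simulate da = theta*(mu - a)*dt + sigma*dW with Euler-Maruyama.

    Each increment is O(sqrt(dt)), so the sampled action path becomes
    continuous as dt -> 0, while the stationary variance sigma^2/(2*theta)
    is controlled explicitly by the chosen coefficients.
    """
    rng = random.Random(seed)
    a, path = a0, [a0]
    for _ in range(steps):
        dW = rng.gauss(0.0, math.sqrt(dt))          # Brownian increment
        a += theta * (mu - a) * dt + sigma * dW     # SDE update
        path.append(a)
    return path

path = ou_path()
# Largest single-step jump stays small (continuity of the action signal) ...
max_jump = max(abs(b - a) for a, b in zip(path, path[1:]))
# ... while the long-run spread is bounded by the stationary std sigma/sqrt(2*theta).
stationary_std = 0.2 / math.sqrt(2 * 0.15)
```

This is the same mechanism DDPG borrows for exploration noise; the patent's contribution, per the abstract, is to make the SDE the basic model of the action policy itself rather than an additive noise term.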


Image

Smart Image Click on the blue labels to locate them in the text.
Viewing Examples
Smart Image
  • Continuous reinforcement learning system and method based on stochastic differential equation
  • Continuous reinforcement learning system and method based on stochastic differential equation
  • Continuous reinforcement learning system and method based on stochastic differential equation


Embodiment Construction

[0045] In order to make the purpose, technical solution and advantages of the present invention clearer, the continuous reinforcement learning system and method based on stochastic differential equations of the present invention are further described below in conjunction with the accompanying drawings and embodiments.

[0046] The invention proposes a continuous reinforcement learning system and method based on stochastic differential equations, which are suitable for continuous control applications.

[0047] As shown in Figure 1, the continuous reinforcement learning method based on stochastic differential equations proposed by the present invention includes the following steps:

[0048] Step 1, initialize all parameters in the action policy generator APG, environment state estimator ESE, value estimator VE, memory storage module MS and external environment EE included in the whole learning method.
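Step 1 above can be sketched in code. The module names (APG, ESE, VE, MS, EE) come from the patent text; everything inside them — parameter-set sizes, the uniform initialization scale, and the replay-buffer capacity — is an illustrative assumption, since the patent does not disclose those details here.

```python
import random

class MemoryStorage:                      # MS: buffer for (s, a, R, s') transitions
    def __init__(self, capacity=10000):
        self.capacity, self.buffer = capacity, []

    def store(self, transition):
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)            # drop the oldest transition when full
        self.buffer.append(transition)

def init_params(n, scale=0.1, seed=0):
    """Small random initial weights (a common default; an assumption here)."""
    rng = random.Random(seed)
    return [rng.uniform(-scale, scale) for _ in range(n)]

# Step 1: initialize every module's parameter set before training starts.
theta_v = init_params(8, seed=1)   # APG action-value parameters (size illustrative)
theta_p = init_params(8, seed=2)   # ESE environment-state parameters
theta_q = init_params(8, seed=3)   # VE Q-function parameters
ms = MemoryStorage()               # MS starts empty; EE supplies the initial state
```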

[0049] The present invention takes the Pendulum-v0 in...
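Pendulum-v0 is the classic OpenAI Gym swing-up task with a continuous one-dimensional torque action, which is why it suits a continuous-control method. For readers without Gym installed, a single dynamics step can be approximated as below; the constants and update rule follow the well-known Gym implementation, but should be checked against the installed version.

```python
import math

# Approximate one step of Pendulum-v0 (constants per the classic Gym source).
G, M, L, DT, MAX_TORQUE, MAX_SPEED = 10.0, 1.0, 1.0, 0.05, 2.0, 8.0

def angle_normalize(x):
    """Wrap an angle into [-pi, pi)."""
    return ((x + math.pi) % (2 * math.pi)) - math.pi

def pendulum_step(th, thdot, u):
    u = max(-MAX_TORQUE, min(MAX_TORQUE, u))          # clip the continuous action
    cost = angle_normalize(th) ** 2 + 0.1 * thdot ** 2 + 0.001 * u ** 2
    thdot = thdot + (-3 * G / (2 * L) * math.sin(th + math.pi)
                     + 3.0 / (M * L ** 2) * u) * DT
    thdot = max(-MAX_SPEED, min(MAX_SPEED, thdot))
    th = th + thdot * DT
    return th, thdot, -cost                            # reward is negative cost

# Hanging straight down with zero torque: the pendulum stays (numerically) put
# and the reward is approximately -pi**2.
th, thdot, r = pendulum_step(math.pi, 0.0, 0.0)
```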



Abstract

The invention discloses a continuous reinforcement learning system and method based on a stochastic differential equation. The system comprises an action strategy generator APG, an environment state estimator ESE, a value estimator VE, a memory storage module MS and an external environment EE. The method comprises the following specific steps: the action strategy generator APG, environment state estimator ESE and value estimator VE are initialized; the action strategy generator APG calculates an output action value increment Δa_k; the external environment EE outputs a next action value a_{k+1}, a next environment state value s_{k+1} and a current reward value R_k, and stores these values in the memory storage module MS; the environment state estimator ESE updates the environment state parameter set θ_p and predicts a future environment state estimate s'_k; the VE optimizer updates the Q-function network and predicts a future reward estimate R'_k; and the APG optimizer updates the action value parameter set θ_v. With a stochastic differential equation as the basic model, the method achieves continuity of action control, keeps the variance of the training process controllable, and selects actions by predicting changes in the environment, thereby achieving better interaction with the environment.
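The cycle the abstract describes can be sketched as a control loop. Only the control flow below mirrors the text; every function body is a deliberately trivial stand-in (a toy linear environment and linear predictors), not the patented implementation, and the parameter-update steps are omitted.

```python
import math
import random

rng = random.Random(0)

def apg_increment(s, theta_v, dt=0.05):
    """APG: SDE-style action increment, drift + diffusion * Brownian increment."""
    return -0.1 * s * dt + 0.2 * rng.gauss(0.0, math.sqrt(dt))

def env_step(s, a):                       # EE: toy linear environment (assumption)
    s_next = 0.9 * s + 0.1 * a
    return s_next, -(s_next ** 2)         # next state s_{k+1}, reward R_k

def ese_predict(s, theta_p):              # ESE: one-step state prediction s'_k
    return 0.9 * s

def ve_predict(s, theta_q):               # VE: predicted future reward R'_k
    return -(s ** 2)

s, a = 1.0, 0.0
theta_v, theta_p, theta_q, memory = [0.0], [0.0], [0.0], []

for k in range(100):
    delta_a = apg_increment(s, theta_v)   # APG outputs the increment Δa_k
    a = a + delta_a                       # continuous action update
    s_next, R = env_step(s, a)            # EE returns s_{k+1} and R_k
    memory.append((s, a, R, s_next))      # MS stores the transition
    s_pred = ese_predict(s, theta_p)      # ESE predicts s'_k ...
    R_pred = ve_predict(s_pred, theta_q)  # ... and VE predicts R'_k from it
    s = s_next                            # θ_p, θ_q, θ_v updates omitted here
```

Updating the action by an increment Δa_k, rather than resampling a fresh action each step, is what gives the action path its continuity.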

Description

Technical field

[0001] The invention relates to the fields of reinforcement learning and stochastic processes, and in particular to a reinforcement learning method for continuous systems.

Background technique

[0002] Deep reinforcement learning is an end-to-end learning system that combines the perception ability of deep learning with the decision-making ability of reinforcement learning; it is highly versatile and realizes direct control from raw input to output. Reinforcement learning has become a very important learning paradigm, enabling the agent to judge the current environment state through the value function while interacting with the environment, and thus take actions that obtain better rewards. At present, reinforcement learning algorithms mainly focus on discrete action policy sets, while classic continuous reinforcement learning methods such as DDPG and A3C can be used for continuous motion control in applications such as robo...


Application Information

IPC(8): G06F17/13, G06K9/62
CPC: G06F17/13, G06F18/295, G06F18/214
Inventor: 贾文川, 程丽梅, 陈添豪, 孙翊, 马书根
Owner SHANGHAI UNIV