
Direct Inverse Reinforcement Learning Using Density Ratio Estimation

A technology of inverse reinforcement learning based on density ratio estimation, applied in the field of inverse reinforcement learning systems; it addresses problems such as complex integral evaluation and the inability to solve continuous problems in practice.

Active Publication Date: 2022-05-06
OKINAWA INST OF SCI & TECH SCHOOL

AI Technical Summary

Problems solved by technology

Furthermore, the methods proposed in NPL 6 cannot solve continuous problems in practice because their algorithms involve complex integral evaluations of

Method used


Examples


[0223]

[0224] Consider two sampling methods: one is uniform sampling and the other is trajectory-based sampling. In the uniform sampling method, x is sampled from a uniform distribution defined over the entire state space; in other words, p(x) and π(x) are regarded as uniform distributions. Then y is sampled from the uncontrolled and controlled transition probabilities to construct the datasets D^p and D^π, respectively. In the trajectory-based sampling method, p(y|x) and π(y|x) are used to generate trajectories of states starting from the same initial state x_0. Pairs of state transitions are then randomly selected from the trajectories to construct D^p and D^π. In this case p(x) is expected to differ from π(x). A sketch of both sampling schemes is given below.
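The following Python sketch illustrates the two dataset-construction schemes under stated assumptions: a toy discrete-state setting with a hypothetical uncontrolled transition matrix P and controlled transition matrix PI; none of these specifics come from the patent, and the function names are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting (assumed, not from the patent): N discrete states with
# uncontrolled transitions P[y, x] = p(y|x) and controlled transitions PI[y, x] = pi(y|x).
N = 5
P = rng.dirichlet(np.ones(N), size=N).T    # columns sum to 1: p(y|x)
PI = rng.dirichlet(np.ones(N), size=N).T   # columns sum to 1: pi(y|x)

def uniform_sampling(trans, n_samples):
    """x ~ Uniform over the state space, y ~ trans(.|x); returns (x, y) pairs."""
    xs = rng.integers(0, N, size=n_samples)
    ys = np.array([rng.choice(N, p=trans[:, x]) for x in xs])
    return list(zip(xs, ys))

def trajectory_sampling(trans, x0, length, n_pairs):
    """Generate one trajectory from x0 under trans, then draw random transition pairs."""
    traj = [x0]
    for _ in range(length):
        traj.append(rng.choice(N, p=trans[:, traj[-1]]))
    idx = rng.integers(0, length, size=n_pairs)
    return [(traj[i], traj[i + 1]) for i in idx]

# Uniform sampling: D^p and D^pi share the same (uniform) state distribution.
D_p_uniform = uniform_sampling(P, 1000)
D_pi_uniform = uniform_sampling(PI, 1000)

# Trajectory-based sampling: both trajectories start from the same state x0,
# but the resulting state distributions p(x) and pi(x) generally differ.
D_p_traj = trajectory_sampling(P, x0=0, length=1000, n_pairs=1000)
D_pi_traj = trajectory_sampling(PI, x0=0, length=1000, n_pairs=1000)
```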

[0225] For each cost function, the corresponding value function is computed by solving Equation (4), and the corresponding optimal controlled probability is evaluated by Equation (5). In previous methods (Todorov, 2009b; NPL 25), exp(-V(x)) is represented by a linear model, but this is di...
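The patent does not reproduce Equations (4) and (5) at this point. As a point of reference, in Todorov's linearly solvable MDP framework (Todorov, 2009b) the desirability function z(x) = exp(-V(x)) satisfies z(x) = exp(-q(x)) Σ_y p(y|x) z(y), and the optimal controlled probability is π(y|x) = p(y|x) z(y) / Σ_y' p(y'|x) z(y'). The sketch below solves this by power iteration on a toy discrete chain; the exact form of Equations (4) and (5) in the patent may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linearly solvable MDP (assumed setting, not taken from the patent):
# N discrete states, state cost q(x), uncontrolled transitions P[y, x] = p(y|x).
N = 5
q = rng.uniform(0.0, 1.0, size=N)          # state-dependent cost q(x)
P = rng.dirichlet(np.ones(N), size=N).T    # columns sum to 1: p(y|x)

# Linearized Bellman equation (Todorov 2009): z(x) = exp(-q(x)) * sum_y p(y|x) z(y).
# Solve for the desirability z = exp(-V) by power iteration; each iterate is
# normalized, so z is the dominant eigenvector and V is recovered only up to
# an additive constant.
z = np.ones(N)
for _ in range(10_000):
    z_new = np.exp(-q) * (P.T @ z)         # (P.T @ z)[x] = sum_y p(y|x) z(y)
    z_new /= np.linalg.norm(z_new)
    if np.allclose(z_new, z, atol=1e-12):
        break
    z = z_new

V = -np.log(z)                             # value function (up to a constant)

# Optimal controlled probability: pi(y|x) proportional to p(y|x) z(y).
PI = P * z[:, None]
PI /= PI.sum(axis=0, keepdims=True)        # renormalize each column over y
```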

Embodiment 2

[0294] Next, Embodiment 2, which has characteristics superior to Embodiment 1 in some respects, will be described. Figure 12 schematically shows the differences between Embodiment 1 and Embodiment 2. As described above, and as shown in Figure 12(a), Embodiment 1 uses a density ratio estimation algorithm twice together with the regularized least squares method. In contrast, in Embodiment 2 of the present invention, the logarithm of the density ratio π(x)/b(x) is estimated using a standard density ratio estimation (DRE) algorithm, and the density ratio π(x,y)/b(x,y) is used together with the Bellman equation to calculate r(x) and V(x) as the reward and value functions, respectively. In more detail, Embodiment 1 requires the following three steps: (1) estimate π(x)/b(x) with a standard DRE algorithm; (2) estimate π(x,y)/b(x,y) with a standard DRE algorithm; and (3) compute r(x) and V(x) by regularized least squares using the Bellman equation (a sketch of these three steps is given after this paragraph). In contrast, Embodiment 2 use...
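The following Python sketch illustrates the three-step structure of Embodiment 1 under explicit assumptions: the density ratios are estimated by logistic-regression-based probabilistic classification (a standard DRE technique, though the patent does not specify which DRE algorithm is used); the Bellman-equation relation is taken in the assumed form r(x) + γV(y) - V(x) = ln π(x,y)/b(x,y) - ln π(x)/b(x), consistent with the quantities named in the Abstract; and r and V are fitted by ridge-regularized least squares over one-hot state features. The toy data and all names are illustrative, not the patent's implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

rng = np.random.default_rng(2)

# ---- Toy data (illustrative only): discrete states, transitions under b and pi ----
N, gamma, n = 6, 0.9, 5000
B  = rng.dirichlet(np.ones(N), size=N).T           # b(y|x), columns sum to 1
PI = rng.dirichlet(np.ones(N), size=N).T           # pi(y|x), columns sum to 1

def sample(trans, n):
    x = rng.integers(0, N, size=n)                 # uniform state sampling
    y = np.array([rng.choice(N, p=trans[:, xi]) for xi in x])
    return x, y

xb, yb = sample(B, n)                              # dataset D^b
xp, yp = sample(PI, n)                             # dataset D^pi

onehot = lambda s: np.eye(N)[s]                    # one-hot state features

def log_density_ratio(feat_pi, feat_b):
    """Steps 1 and 2: DRE via probabilistic classification (logistic regression).
    With equal sample sizes, the classifier's log-odds estimate the log density ratio."""
    X = np.vstack([feat_pi, feat_b])
    t = np.concatenate([np.ones(len(feat_pi)), np.zeros(len(feat_b))])
    clf = LogisticRegression(max_iter=1000).fit(X, t)
    return lambda f: clf.decision_function(f)

# Step 1: estimate log pi(x)/b(x) from state samples.
log_r_state = log_density_ratio(onehot(xp), onehot(xb))
# Step 2: estimate log pi(x,y)/b(x,y) from state-transition samples.
log_r_joint = log_density_ratio(np.hstack([onehot(xp), onehot(yp)]),
                                np.hstack([onehot(xb), onehot(yb)]))

# Step 3: regularized least squares on the (assumed) Bellman relation
#   r(x) + gamma*V(y) - V(x) = log pi(x,y)/b(x,y) - log pi(x)/b(x).
target = (log_r_joint(np.hstack([onehot(xp), onehot(yp)]))
          - log_r_state(onehot(xp)))
# Design matrix: first N columns pick out r(x), last N columns give gamma*V(y) - V(x).
A = np.hstack([onehot(xp), gamma * onehot(yp) - onehot(xp)])
w = Ridge(alpha=1.0, fit_intercept=False).fit(A, target).coef_
r_hat, V_hat = w[:N], w[N:]                        # estimated reward and value per state
```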



Abstract

A method of inverse reinforcement learning for estimating the reward and value functions underlying the behavior of a subject, the method comprising: acquiring data representing changes in a state variable that defines the behavior of the subject; applying the modified Bellman equation given by Formula (1) to the acquired data, where r(x) and V(x) denote the reward function and the value function in state x, respectively, γ denotes the discount factor, and b(y|x) and π(y|x) denote the state transition probabilities before and after learning, respectively; estimating the logarithm of the density ratio π(x)/b(x) in Formula (2); estimating r(x) and V(x) in Formula (2) from the estimated density ratio π(x,y)/b(x,y); and outputting the estimated r(x) and V(x).
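The abstract refers to Formulas (1) and (2) without reproducing them. Based on the quantities it names (r(x), V(x), γ, b(y|x), π(y|x)) and on the density ratio IRL literature, a plausible form is sketched below; this reconstruction is an assumption, and the exact formulas should be taken from the patent claims.

```latex
% Plausible form of the modified Bellman equation (Formula (1)) -- an assumption
% reconstructed from the quantities named in the abstract, not quoted from the patent:
\ln \frac{\pi(y \mid x)}{b(y \mid x)} \;=\; r(x) + \gamma V(y) - V(x)

% Combined with the factorization pi(x,y) = pi(x)\,pi(y|x) (and likewise for b),
% this gives the relation presumably referred to as Formula (2):
\ln \frac{\pi(x, y)}{b(x, y)} \;=\; \ln \frac{\pi(x)}{b(x)} + r(x) + \gamma V(y) - V(x)
```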

Description

Technical Field

[0001] The present invention relates to inverse reinforcement learning, and more particularly, to systems and methods for inverse reinforcement learning. This application claims the benefit of US Provisional Application No. 62/308,722, filed March 15, 2016, which is hereby incorporated by reference.

Background Art

[0002] Understanding human behavior from observation is critical for developing artificial systems that can interact with people. Since our decision-making process is influenced by the rewards/costs associated with the chosen actions, the problem can be formulated as estimating rewards/costs from observed behavior.

[0003] The idea of inverse reinforcement learning was originally proposed by Ng and Russell (2000) (NPL 14). The OptV algorithm proposed by Dvijotham and Todorov (2010) (NPL 6) is a prior work showing that the demonstrator's policy is approximated by a value function, which is the solution of the linearized Bellman equation. [000...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06N3/08, G06N20/00
CPC: G06N20/00, G06N7/01
Inventor 内部英治 (Eiji Uchibe), 铜谷贤治 (Kenji Doya)
Owner OKINAWA INST OF SCI & TECH SCHOOL