
Robot control method based on offline model pre-training learning DDPG algorithm

An offline-model pre-training technology, applied in the field of DDPG-based robot control, which addresses the problem that a large number of trial-and-error actions are generated during initial training and reduces the early-stage workload of DDPG online learning.

Active Publication Date: 2021-04-16
ZHONGYUAN ENGINEERING COLLEGE

AI Technical Summary

Problems solved by technology

[0008] Aiming at the technical problems that existing control methods using the DDPG algorithm fall into local minima during online training and generate a large number of trial-and-error actions and invalid data when the DDPG network is initially trained, the present invention proposes a robot control method based on a DDPG algorithm with offline model pre-training learning. Starting from a large amount of existing offline data, the method uses that data to train the object state model and the value-reward model offline, and imitates the online training process in advance to pre-train the action network and the value network in DDPG, thereby reducing the initial workload of DDPG online learning and improving its quality.
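The two-stage idea above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: a linear least-squares fit stands in for the neural value-reward model, the dataset is synthetic, and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical offline dataset of (state, action, reward) tuples; in the
# patent these would come from previously logged robot trajectories.
states = rng.normal(size=(500, 4))
actions = rng.normal(size=(500, 2))
rewards = (0.5 * states[:, 0] - np.abs(actions[:, 0])).reshape(-1, 1)

# Stage A: fit a value-reward model offline (plain least squares here as a
# stand-in for the neural reward model described in the patent).
X = np.hstack([states, actions, np.ones((500, 1))])
w, *_ = np.linalg.lstsq(X, rewards, rcond=None)

# Stage B: pre-train a linear "critic" toward the learned model's outputs,
# imitating the online training process before any real interaction.
critic_w = np.zeros_like(w)
lr = 0.1
for _ in range(200):
    pred = X @ critic_w
    target = X @ w                    # model-generated reward targets
    critic_w -= lr * X.T @ (pred - target) / len(X)

# After pre-training, the critic closely agrees with the offline model.
err = float(np.abs(X @ critic_w - X @ w).mean())
```

The point is only the shape of the procedure: model fitting first, then supervised pre-training of the critic against model-generated targets, so that online learning starts from an informed initialization.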



Examples


Embodiment Construction

[0068] The following will clearly and completely describe the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without making creative efforts belong to the protection scope of the present invention.

[0069] As shown in Figure 1, a robot control method based on the DDPG algorithm with offline model pre-training learning comprises the following steps:

[0070] Step 1: Collect the training data of the 2D dummy in the offline environment, and preprocess the training data to obtain the training data set.

[0071] The experimental environment is Windows 10 + Paddle 1.7 + PARL 1.3.1 + CUDA 10.0. The hardware is a core i8-8300 CPU with a GTX1060 graphics card, and the simulation pl...
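Step 1 above (collecting 2D-dummy data and preprocessing it into a training data set) can be sketched as follows. This is an illustrative assumption, not the patent's code: the episode logs are synthetic and the field names are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical raw logs from the 2D-dummy simulation: per-episode arrays of
# states, actions, and rewards (names are illustrative only).
episodes = [
    {"states": rng.normal(size=(20, 4)),
     "actions": rng.normal(size=(19, 2)),
     "rewards": rng.normal(size=(19,))}
    for _ in range(5)
]

# Flatten the episodes into (s, a, r, s', done) transition tuples.
transitions = []
for ep in episodes:
    T = len(ep["actions"])
    for t in range(T):
        transitions.append((ep["states"][t], ep["actions"][t],
                            ep["rewards"][t], ep["states"][t + 1], t == T - 1))

# Normalize states with statistics of the offline dataset so the networks
# see zero-mean, unit-variance inputs during pre-training.
all_states = np.stack([s for s, *_ in transitions])
mu, sigma = all_states.mean(axis=0), all_states.std(axis=0) + 1e-8
dataset = [((s - mu) / sigma, a, r, (s2 - mu) / sigma, d)
           for s, a, r, s2, d in transitions]
```

Flattening into transition tuples matches the replay-buffer format that DDPG training consumes later, so the same preprocessing can serve both the offline pre-training and the online phase.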



Abstract

The invention provides a robot control method based on an offline model pre-training learning DDPG algorithm, comprising the following steps: collecting training data of a 2D dummy in an offline environment and preprocessing it to obtain a training data set; constructing and initializing an artificial neural network and its parameters; pre-training the evaluation network and the action network offline with the training data set; initializing a target network with the pre-trained evaluation network, the agent storing state-transition data into a storage buffer to serve as the online data set for training the online networks; training the online policy network and the online Q network with the online data set and updating them using a DDQN structure; and performing soft updates to control the state of the 2D dummy. The method is more efficient, produces more accurate Q values and a higher average reward, yields a more stable and reliable learning strategy, converges faster, attains a higher cumulative reward, and enables the robot to reach the destination quickly.
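The "soft updating" step in the abstract is the Polyak-averaging rule standard in DDPG, where target-network weights slowly track the online weights. A minimal sketch, assuming a rate of tau = 0.005 (the patent does not state the value) and toy parameter vectors in place of real network weights:

```python
import numpy as np

tau = 0.005  # assumed soft-update rate; the patent only states that a
             # soft update is performed, not its magnitude

# Toy online and target parameters standing in for network weights.
online = {"w": np.ones(3), "b": np.full(2, 2.0)}
target = {"w": np.zeros(3), "b": np.zeros(2)}

def soft_update(target, online, tau):
    """target <- tau * online + (1 - tau) * target, parameter by parameter."""
    for k in target:
        target[k] = tau * online[k] + (1.0 - tau) * target[k]
    return target

for _ in range(1000):
    soft_update(target, online, tau)

# After many updates the target tracks the (here static) online network.
gap = float(np.abs(target["w"] - online["w"]).max())
```

A small tau keeps the target network slowly moving, which stabilizes the Q-learning targets during online training.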

Description

Technical Field

[0001] The present invention relates to the technical field of robot control, in particular to a robot control method based on the DDPG algorithm with offline model pre-training learning.

Background

[0002] Reinforcement learning is an important branch of machine learning in which an agent learns to behave in an environment by performing actions and observing the rewards or outcomes obtained from those actions. It mainly contains four elements: agent, environment state, action, and reward. The goal of reinforcement learning is for the agent to act on the positive feedback of the environment as much as possible, learning a good strategy and obtaining the largest cumulative reward.

[0003] At present, deep reinforcement learning has had an important impact on the simulation control, motion control, indoor and outdoor navigation, and simultaneous positioning of robots, enabling robots to automatically l...
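The four elements named in [0002] form the standard interaction loop, which can be made concrete with a toy example. This is purely illustrative and not the 2D dummy from the patent: a 1-D line environment and a trivially greedy agent, with all names invented.

```python
# A toy 1-D environment: the agent starts at position 0 and is rewarded
# for reaching position 5 (illustrative only).
class LineEnv:
    def reset(self):
        self.pos = 0
        return self.pos                      # environment state

    def step(self, action):                  # action: -1 or +1
        self.pos += action
        done = self.pos == 5
        reward = 1.0 if done else -0.1       # reward signal
        return self.pos, reward, done

# A trivially greedy agent; a real agent would learn from the rewards.
def agent(state):
    return 1

env = LineEnv()
state, total, done = env.reset(), 0.0, False
while not done:
    action = agent(state)                    # agent chooses an action
    state, reward, done = env.step(action)   # environment responds
    total += reward
```

Each pass through the loop exercises all four elements: the agent observes the state, selects an action, and the environment returns a new state and a reward that a learning algorithm would accumulate.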

Claims


Application Information

IPC(8): G06F30/27, G06N3/04, G06N3/08, G06F111/08, G06F111/10
Inventors: 张茜, 王洪格, 姚中原, 戚续博
Owner ZHONGYUAN ENGINEERING COLLEGE