
Robot imitation learning method, device, robot and storage medium

Robot and learning-method technology, applied to manipulators, program-controlled manipulators, manufacturing tools, etc.; it solves the problem that the stability of robot imitation learning and the speed of model training cannot be guaranteed at the same time, thereby guaranteeing stability and improving the human-likeness of the robot's motion.

Active Publication Date: 2020-04-07
SHENZHEN INST OF ADVANCED TECH

AI Technical Summary

Problems solved by technology

[0005] The purpose of the present invention is to provide a robot imitation learning method, device, robot and storage medium, aiming to solve the problem that the stability, reproduction accuracy and model training speed of robot imitation learning in the prior art cannot be guaranteed at the same time.



Examples


Embodiment 1

[0027] Figure 1 shows the implementation flow of the robot imitation learning method provided by Embodiment 1 of the present invention. For convenience of explanation, only the parts related to the embodiment of the present invention are shown; the details are as follows:

[0028] In step S101, when a preset movement command is received, the current pose of the end effector is acquired.

[0029] The embodiments of the present invention are applicable to, but not limited to, robots with structures such as joints and connecting rods that can perform actions such as stretching and grasping. When receiving a motion or movement command sent by the user or the control system, the robot can obtain the joint angle of each joint and then calculate the current pose of the end effector from these joint angles using forward kinematics. In addition, if the robot itself has a position sensor for the end effector, the current pose of the end ef...
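The forward-kinematics computation mentioned above can be illustrated with a minimal sketch. The example below assumes a simple planar arm with revolute joints; the function name, the link lengths and the pose representation (x, y, orientation) are illustrative assumptions, not the specific kinematic model of the patent.

```python
import numpy as np

def forward_kinematics(joint_angles, link_lengths):
    """Illustrative planar forward kinematics: joint angles -> end-effector pose.

    Each joint rotation is accumulated and each link advances the end effector
    along the accumulated direction; the pose is returned as (x, y, orientation).
    """
    x, y, theta = 0.0, 0.0, 0.0
    for angle, length in zip(joint_angles, link_lengths):
        theta += angle                   # accumulate the joint rotation
        x += length * np.cos(theta)      # advance along the current direction
        y += length * np.sin(theta)
    return np.array([x, y, theta])

# Example: a hypothetical 3-joint planar arm with 0.3 m links
current_pose = forward_kinematics([0.1, 0.4, -0.2], [0.3, 0.3, 0.3])
```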

Embodiment 2

[0045] Figure 2 shows the implementation flow of collecting the training sample set and training the dynamic prediction model in the robot imitation learning method provided by Embodiment 2 of the present invention. For convenience of description, only the parts related to the embodiment of the present invention are shown; the details are as follows:

[0046] In step S201, during the teaching process, the pose of the end effector is collected on each of its teaching trajectories at a preset sampling time interval.

[0047] In the embodiment of the present invention, the teaching action can be given by the teaching operator or the user during the teaching process, and the end effector moves according to the teaching action. The robot itself, or an external motion capture device, collects the pose of the end effector on each motion trajectory (teaching trajectory) at the preset sampling time interval, and the collected pose of the end e...
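As a minimal sketch of how such a training sample set might be assembled, the snippet below pairs each sampled pose with the pose at the next sampling instant, so that the dynamic prediction model can later be trained to map the current pose to the next pose. The function name and the array layout are illustrative assumptions.

```python
import numpy as np

def collect_training_samples(teaching_trajectories):
    """Build (current pose, next pose) pairs from the demonstrated trajectories.

    teaching_trajectories: list of arrays of shape (T_i, d); row t of each array
    is the end-effector pose sampled at the t-th sampling instant of one teaching
    trajectory (poses are spaced by the preset sampling time interval).
    """
    inputs, targets = [], []
    for trajectory in teaching_trajectories:
        for t in range(len(trajectory) - 1):
            inputs.append(trajectory[t])       # pose at the current sampling instant
            targets.append(trajectory[t + 1])  # pose at the next sampling instant
    return np.asarray(inputs), np.asarray(targets)
```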

Embodiment 3

[0067] Figure 3 shows the structure of the robot imitation learning device provided by Embodiment 3 of the present invention. For convenience of description, only the parts related to the embodiment of the present invention are shown, including:

[0068] The pose acquiring unit 31 is configured to acquire the pose of the end effector at the current moment when a preset motion instruction is received.

[0069] In the embodiment of the present invention, when receiving a motion or movement command sent by the user or the control system, the robot can obtain the joint angle of each joint and then calculate the current pose of the end effector from these joint angles using forward kinematics. In addition, if the robot itself has a position sensor for the end effector, the current pose of the end effector can be obtained directly through that sensor.

[0070] The pose judging unit 32 is used to detect whether the pose at the current moment is the prese...
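A minimal sketch of the two units described so far is given below. The class and method names, and the robot interface they call, are hypothetical; the sketch only illustrates the division of labour between the pose acquiring unit 31 and the pose judging unit 32.

```python
import numpy as np

class PoseAcquiringUnit:
    """Unit 31: obtains the end-effector pose at the current moment."""

    def __init__(self, robot):
        self.robot = robot  # hypothetical robot interface

    def acquire(self):
        # Prefer a direct end-effector position sensor if one is available,
        # otherwise compute the pose by forward kinematics from the joint angles.
        if self.robot.has_end_effector_sensor():
            return self.robot.read_end_effector_pose()
        return self.robot.forward_kinematics(self.robot.read_joint_angles())


class PoseJudgingUnit:
    """Unit 32: checks whether the current pose is the preset target pose."""

    def __init__(self, target_pose, tolerance=1e-3):
        self.target_pose = np.asarray(target_pose)
        self.tolerance = tolerance  # how close counts as "reached"

    def is_target(self, pose):
        return np.linalg.norm(np.asarray(pose) - self.target_pose) < self.tolerance
```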



Abstract

The invention is applicable to the field of robots and intelligent control, and provides a robot imitation learning method and device, a robot and a storage medium. The method comprises the steps of: acquiring the pose of the end effector at the current time when a movement instruction is received, and checking whether the pose at the current time is the target pose; determining that the preset imitation learning task is finished by the end effector if the pose at the current time is the target pose; otherwise, generating the predicted pose of the end effector at the next time according to the pose at the current time and a dynamic prediction model, and adjusting the joint angles according to the predicted pose; and setting the adjusted pose of the end effector as the pose at the current time and returning to the step of checking whether the pose at the current time is the target pose. The dynamic prediction model is obtained by training an extreme learning machine model in combination with a preset stability constraint condition, so that the stability, reproduction accuracy and model training speed of robot imitation learning are guaranteed, and the human-likeness of the robot's motion is effectively improved.
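A minimal sketch of the execution loop described in this abstract is given below; the robot and model interfaces (get_end_effector_pose, move_end_effector_to, predict) are hypothetical names used only for illustration, not the patent's actual API.

```python
import numpy as np

def imitation_control_loop(robot, dynamic_model, target_pose, tolerance=1e-3, max_steps=1000):
    """Run the pose-check / predict / adjust loop until the target pose is reached."""
    target_pose = np.asarray(target_pose)
    pose = np.asarray(robot.get_end_effector_pose())     # pose at the current time
    for _ in range(max_steps):
        if np.linalg.norm(pose - target_pose) < tolerance:
            return True                                   # imitation learning task finished
        next_pose = dynamic_model.predict(pose)           # predicted pose at the next time
        robot.move_end_effector_to(next_pose)             # adjust the joint angles accordingly
        pose = np.asarray(robot.get_end_effector_pose())  # adjusted pose becomes the current pose
    return False                                          # target not reached within max_steps
```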

Description

Technical field [0001] The invention belongs to the technical field of robots and intelligent control, and in particular relates to a robot imitation learning method, device, robot and storage medium. Background technique [0002] At the current stage of robot applications, especially in industrial applications, users usually pre-define the movement trajectory of the robot arm, or pre-set a certain task environment, so that the robot arm repeats the execution according to the plan. In this control mode, the manipulator cannot cope with changes in the task environment or sudden disturbances, or requires heavy manual programming to accomplish tasks in complex scenes or difficult tasks. More importantly, the movement trajectory of the manipulator does not embody implicit human operating habits. Robot imitation learning is an important method for solving these problems. [0003] When modeling robot motion through imitation learning, researchers usually hope to achieve the following...


Application Information

Patent Type & Authority: Patent (China)
IPC (8): B25J9/16
CPC: B25J9/161; B25J9/1612; B25J9/163; B25J9/1671
Inventors: 欧勇盛, 王志扬, 段江哗, 金少堃, 徐升, 熊荣, 吴新宇
Owner: SHENZHEN INST OF ADVANCED TECH