
Vision acquisition method, learning system and storage medium based on target attitude deep learning

A technology based on deep learning and target attitude, applied in the field of robotics, which addresses the problems of heavy computation, low fidelity of reproduced actions, and low intelligence, achieving high-fidelity imitation, reduced need for manual involvement, and a higher degree of intelligence.

Active Publication Date: 2021-08-24
SHENZHEN YUEJIANG TECH CO LTD

AI Technical Summary

Problems solved by technology

[0004] The purpose of the present invention is to provide a vision acquisition method, a learning system and a storage medium based on target-posture deep learning, aiming to solve the problems in the prior art that, when a robot imitates human teaching actions, the reproduced motion is of low fidelity, the point-sampling computation is heavy, manual involvement is high, and the degree of intelligence is low.

Method used



Examples


Embodiment Construction

[0011] In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it.

[0012] In describing the present invention, it should be understood that terms indicating orientation or positional relationships, such as "length", "width", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner" and "outer", are based on the orientation or positional relationship shown in the drawings and are used only for convenience and simplicity of description, rather than indicating or implying that the referenced device or element must have a particular orientation or be constructed and operated in a ...


PUM

No PUM available.

Abstract

The present invention relates to the technical field of robots and discloses a vision acquisition method, a learning system and a storage medium based on target-posture deep learning, which are used to control a robot to learn teaching actions. The vision acquisition method based on target-posture deep learning comprises the following steps: collecting teaching image information of the teaching action process from multiple directions; analyzing the teaching image information, selecting multiple reference points of the teaching action, and fitting at least two functions from the relationship between movement and time, namely a posture function and a displacement function; and generating a control program so that the robot can reproduce the teaching action according to the posture function and the displacement function. In the present invention, the teaching action is simplified into functions, which reduces the computation of point selection, and the driver program is generated from the collected teaching action of the target, which reduces the need for manual participation and offers high intelligence and high-fidelity imitation.
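
To make the fitting and program-generation steps concrete, below is a minimal sketch, not the patented implementation, of how tracked reference points could be turned into a displacement function and a posture function of time and then sampled into waypoints for a robot controller. The function names, the polynomial model, and the Euler-angle posture representation are assumptions made for illustration only.

```python
# Minimal illustrative sketch (assumed names and models, not the patented
# implementation): fit a displacement function and a posture function of
# time from tracked reference points, then sample them into waypoints.
import numpy as np


def fit_teaching_trajectory(timestamps, positions, euler_angles, degree=5):
    """Fit displacement (xyz) and posture (roll/pitch/yaw) as polynomials of time.

    timestamps   : (N,) seconds since the teaching action started
    positions    : (N, 3) coordinates of a chosen reference point
    euler_angles : (N, 3) orientation of the reference frame, in radians
    """
    t = np.asarray(timestamps, dtype=float)
    pos = np.asarray(positions, dtype=float)
    ang = np.asarray(euler_angles, dtype=float)

    # One 1-D polynomial fit per translation axis / rotation axis.
    pos_coeffs = [np.polyfit(t, pos[:, k], degree) for k in range(3)]
    ang_coeffs = [np.polyfit(t, ang[:, k], degree) for k in range(3)]

    def displacement(ti):
        """Displacement function: xyz position at time ti."""
        return np.array([np.polyval(c, ti) for c in pos_coeffs])

    def posture(ti):
        """Posture function: roll/pitch/yaw at time ti."""
        return np.array([np.polyval(c, ti) for c in ang_coeffs])

    return displacement, posture


def sample_waypoints(displacement, posture, t_end, n_points=50):
    """Sample the fitted functions into (position, orientation) waypoints
    that a generated control program could interpolate between."""
    times = np.linspace(0.0, t_end, n_points)
    return [(displacement(ti), posture(ti)) for ti in times]
```

In such a sketch, the timestamps, positions and angles would come from analyzing the multi-view teaching images, and the sampled waypoints would then be translated into the robot's own motion commands by the generated control program.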

Description

Technical Field

[0001] The present invention relates to the technical field of robots, and in particular to a vision acquisition method, a learning system and a storage medium based on target-posture deep learning.

Background

[0002] A robot is a high-tech product that carries a preset or rule-based program. After receiving a signal or instruction, it can make judgments and take actions to a certain extent, such as moving, picking up objects, and swinging its limbs. The task of a robot is mainly to assist or even replace human work in certain settings. The actions and information judgments involved in real work scenes are often very complicated, and it is difficult to record them all in the robot in advance in the form of a program. Acquiring knowledge and learning autonomously to improve adaptability and intelligence, that is, robot learning, has therefore become a major research focus in the robot industry.

[0003] In the prior art, the process of the robot simulating the human ...

Claims


Application Information

Patent Timeline
No application timeline available.
Patent Type & Authority: Patent (China)
IPC (8): B25J9/00; B25J9/16
CPC: B25J9/0081; B25J9/1697
Inventor: 刘培超, 刘主福, 郎需林
Owner: SHENZHEN YUEJIANG TECH CO LTD