Robot grasp pose estimation method based on object recognition deep learning model

A deep learning and pose estimation technology, applied in the field of computer vision, which solves the problems of complex algorithms and time-consuming three-dimensional information processing, and achieves the effects of improving computing efficiency, improving grasp success rate, and reducing the range of point cloud segmentation.

Publication Date: 2018-12-28 (Inactive)
Applicant: SHANGHAI JIEKA ROBOT TECH CO LTD
Cites: 5 | Cited by: 70


Problems solved by technology

[0005] In view of the above-mentioned defects of the prior art, the technical problem to be solved by the present invention is to overcome the complex algorithms of the prio...




Embodiment Construction

[0032] Hereinafter, a number of preferred embodiments of the present invention are introduced with reference to the accompanying drawings so as to make the technical content clearer and easier to understand. The present invention can be embodied in many different forms, and its protection scope is not limited to the embodiments mentioned herein.

[0033] In the drawings, components with the same structure are denoted by the same reference numerals, and components with similar structures or functions are denoted by similar numerals. The sizes and thicknesses of the components shown in the drawings are arbitrary, and the present invention does not limit them; to make the illustration clearer, the thickness of components is exaggerated in places.

[0034] As shown in Figure 1, the robot used in the embodiment of the present invent...



Abstract

The invention discloses a robot grasping pose estimation method based on an object recognition deep learning model, relating to the technical field of computer vision. The method is based on an RGBD camera and deep learning and comprises the following steps: S1, carrying out camera parameter calibration and hand-eye calibration; S2, training an object detection model; S3, establishing a three-dimensional point cloud template library of the target object; S4, identifying the type and position of each article in the area to be grasped; S5, fusing two-dimensional and three-dimensional visual information to obtain the point cloud of a specific target object; S6, completing the pose estimation of the target object; S7, adopting an error avoidance algorithm based on sample accumulation; S8, continuously repeating steps S4 to S7 with the vision system while the robot end moves toward the target object, so as to realize iterative optimization of the pose estimation of the target object. The algorithm of the invention uses the YOLO target detection model for fast early-stage target detection, which reduces the computation required for three-dimensional point cloud segmentation and matching and improves operating efficiency and accuracy.
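Steps S4 to S6 are where the two-dimensional and three-dimensional information are fused: the YOLO bounding box restricts which depth pixels are back-projected, so segmentation and matching only ever touch the target's points. The sketch below illustrates that flow under stated assumptions; Open3D, the pinhole intrinsics (fx, fy, cx, cy), and all function names are introduced here for illustration and are not from the patent.

```python
# Hedged sketch of steps S4-S6: use a 2-D YOLO bounding box to crop the depth
# image, back-project only those pixels into a point cloud, then register a
# template cloud onto it with ICP. Open3D and every name below are
# assumptions of this sketch, not the patent's implementation.
import numpy as np
import open3d as o3d

def crop_cloud_from_detection(depth, bbox, fx, fy, cx, cy, depth_scale=1000.0):
    """Back-project only the depth pixels inside the detector's box
    (u0, v0, u1, v1), shrinking the region later steps must process."""
    u0, v0, u1, v1 = bbox
    patch = depth[v0:v1, u0:u1].astype(np.float64) / depth_scale  # metres
    us, vs = np.meshgrid(np.arange(u0, u1), np.arange(v0, v1))
    valid = patch > 0                      # drop invalid (zero) depth readings
    z = patch[valid]
    x = (us[valid] - cx) * z / fx          # pinhole back-projection
    y = (vs[valid] - cy) * z / fy
    cloud = o3d.geometry.PointCloud()
    cloud.points = o3d.utility.Vector3dVector(np.stack([x, y, z], axis=1))
    return cloud

def estimate_pose(scene_cloud, template_cloud, init=np.eye(4), threshold=0.01):
    """Point-to-point ICP of the template onto the segmented scene cloud;
    the returned 4x4 matrix is the object's pose in the camera frame."""
    result = o3d.pipelines.registration.registration_icp(
        template_cloud, scene_cloud, threshold, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```

Reusing the previous frame's estimate as `init` while the robot end moves toward the object is one plausible reading of the iterative optimization described in step S8.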

Description

Technical field

[0001] The present invention relates to the technical field of computer vision, in particular to a robot grasping pose estimation method based on an object recognition deep learning model.

Background technique

[0002] In recent years, with the continuous development of computer vision, machine vision technology has been widely used across the manufacturing and service industries, and its combination with other traditional disciplines has become increasingly close, with significant impact in civil engineering, agriculture, medical treatment, and transportation. The application of machine vision in the mechanical field is particularly extensive, and vision-based robot grasping has become a current research hotspot.

[0003] The vision field can be divided into monocular vision, binocular vision and depth vision according to the sensor used. Monocular vision is mainly...
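Step S1 of the method requires hand-eye calibration so that a pose estimated in the camera frame (step S6) can be expressed in the robot base frame for grasping. A minimal sketch follows, assuming an eye-in-hand mounting and OpenCV's calibrateHandEye; the function and variable names are illustrative, not from the patent.

```python
# Hedged sketch of step S1 (hand-eye calibration), assuming OpenCV >= 4.1 and
# a camera mounted on the robot flange (eye-in-hand). Inputs are N paired
# poses: gripper-in-base from the robot controller, and target-in-camera
# from, e.g., cv2.solvePnP on a checkerboard. Names here are illustrative.
import cv2
import numpy as np

def hand_eye_calibrate(R_gripper2base, t_gripper2base,
                       R_target2cam, t_target2cam):
    """Return the fixed camera-to-gripper transform as a 4x4 matrix."""
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base,
        R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_TSAI)
    T = np.eye(4)
    T[:3, :3] = R_cam2gripper
    T[:3, 3] = t_cam2gripper.ravel()
    return T
```

Given this transform and the robot's forward kinematics, the object pose from step S6 can be mapped into the base frame to command the grasp.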


Application Information

IPC(8): G06T 7/80, G06T 7/30
CPC: G06T 7/30, G06T 7/85
Inventors: 李明洋, 王家鹏, 任明俊
Owner: SHANGHAI JIEKA ROBOT TECH CO LTD