
A Pose Estimation Method for Grasping Unknown Objects Based on Mixed Information Input Network Model

A method in the field of autonomous robot grasping that addresses problems such as low grasping efficiency, the difficulty of autonomous grasping, and the difficulty of obtaining three-dimensional information about objects, achieving accurate grasping.

Active Publication Date: 2021-05-07
HARBIN INST OF TECH

AI Technical Summary

Problems solved by technology

However, for a large number of objects of different shapes, the three-dimensional information of an object is difficult to obtain, the search for grasping points is computationally expensive, and efficiency is very low, so autonomous robot grasping of unknown objects is difficult to realize in practical applications.
[0003] At present, there is still a large gap between the intelligence of robots and that of humans; it remains difficult to effectively identify the grasping region of unknown objects and to grasp unknown objects autonomously.



Examples


Specific Embodiment 1

[0070] Specific Embodiment 1: The specific embodiments of the present invention are further elaborated in conjunction with the accompanying drawings. As shown in Figure 1, which is a flow chart of the method for estimating the grasping pose of an unknown object based on a convolutional neural network model of the present invention, the method mainly comprises the following steps:

[0071] Step 1: Image Preprocessing

[0072] 1) Depth information preprocessing

[0073] The mixed information input of this patent includes the color, depth, and normal-vector channel information of the object image; the data come from Microsoft's Kinect depth sensor. Depth channel information usually contains considerable image noise due to shadows, object reflections, and the like, so the depth values of many pixels in the depth image are missing, usually in large contiguous regions. Therefore, when traditional image filtering methods are used to try to ...
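The text is cut off here, but the problem it describes (large contiguous regions of missing depth values that ordinary filters cannot repair) is commonly handled by propagating valid depth into the holes. The following NumPy sketch is an illustration of that general idea, not the patent's actual procedure:

```python
import numpy as np

def fill_depth_holes(depth, max_iters=50):
    """Fill missing (zero) depth pixels by iteratively propagating
    the mean of valid 4-neighbours into the hole regions."""
    d = depth.astype(np.float32).copy()
    for _ in range(max_iters):
        hole = d == 0
        if not hole.any():
            break
        # Shift the image in the four cardinal directions.
        padded = np.pad(d, 1, mode="edge")
        shifts = [padded[:-2, 1:-1], padded[2:, 1:-1],
                  padded[1:-1, :-2], padded[1:-1, 2:]]
        stacked = np.stack(shifts)            # (4, H, W)
        valid = stacked > 0                   # valid-neighbour mask
        count = valid.sum(axis=0)
        total = np.where(valid, stacked, 0).sum(axis=0)
        fillable = hole & (count > 0)
        d[fillable] = (total / np.maximum(count, 1))[fillable]
    return d
```

Each iteration grows the valid region inward by one pixel, so even large holes are eventually filled from their boundary values.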

Embodiment

[0118] This embodiment is illustrated with reference to Figures 1 to 5. The steps of the unknown-object grasping and recognition method based on the convolutional neural network model are as follows:

[0119] Step 1: first preprocess the RGB-D image. The left side of Figure 2 is the color image of the original object, in which the rectangular frame is the rectangular region whose graspability is to be judged, and the direction of the rectangle's long axis is the closing direction of the robot's grasp. The first row on the right side of the figure is the rectangular-region image after the image rotation operation, and the second and third rows are the results of the color image and the normal-vector image after image resizing and whitening.
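The resizing and whitening operations mentioned above can be sketched as follows; the 24×24 patch size is an assumption for illustration, not a value taken from the patent:

```python
import numpy as np

def whiten_patch(patch, size=(24, 24)):
    """Resize a grasp-candidate patch with nearest-neighbour sampling
    (to stay dependency-free), then whiten it: subtract the mean and
    divide by the standard deviation, giving zero mean and unit
    variance per patch."""
    h, w = patch.shape[:2]
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    resized = patch[np.ix_(rows, cols)].astype(np.float32)
    return (resized - resized.mean()) / (resized.std() + 1e-8)
```

Whitening each candidate patch independently makes the network input invariant to global brightness and contrast changes between scenes.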

[0120] Step 2: construct the hybrid information fusion model with the structure shown in Figure 3, and build the deep convolutional neural network model.
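Figure 3's exact layer configuration is not reproduced in this text. Purely as an assumed illustration of what a "mixed information" front end could look like, the seven input channels (3 color + 1 depth + 3 normal-vector) might be stacked and scaled by per-channel fusion weights before entering a shared convolutional stack:

```python
import numpy as np

def fuse_mixed_input(color, depth, normals, weights):
    """Stack color (H, W, 3), depth (H, W) and surface normals
    (H, W, 3) into a 7-channel tensor, then scale each channel by a
    fusion weight (in a real model these weights would be learned)."""
    depth = depth[..., None]                                  # (H, W, 1)
    mixed = np.concatenate([color, depth, normals], axis=-1)  # (H, W, 7)
    return mixed * weights.reshape(1, 1, -1)
```

The point of such a mechanism is that the network can learn how much to trust each modality, rather than treating color, depth, and normals as interchangeable channels.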

[0121] Step 3: input the training data into the ...



Abstract

The invention discloses a method for estimating the grasping pose of an unknown object based on a mixed information input network model, belonging to the field of autonomous robot grasping. The invention aims to realize rapid and effective grasping of unknown objects by robots. The method preprocesses the mixed image information in the training image data set; constructs an information fusion mechanism based on mixed information input and builds a neural network model; trains the network model parameters, including the mixed information fusion mechanism, to obtain an optimized mixed information input network model; uses object segmentation based on RGB-D images to segment the scene images collected by the sensor; uses a candidate-region generation mechanism based on feedback information to search for the best grasping region on the object; and uses the depth information to estimate the robot's grasping position and grasping posture in the optimal grasping region, thereby obtaining the grasp pose for grasping the object. This method helps the robot quickly and accurately realize autonomous grasping of unknown objects.
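As a hypothetical sketch of the last stage described above (estimating the grasping position and posture from depth in the chosen region), one could take the region centre as the grasp point and the mean surface normal as the approach direction; both the region format and the use of the mean normal are assumptions, not the patent's exact procedure:

```python
import numpy as np

def grasp_pose_from_region(depth, normals, region):
    """Hypothetical final stage: grasp point = region centre with its
    depth value; gripper approach direction = mean surface normal
    over the region, renormalised to unit length."""
    x0, y0, x1, y1 = region          # pixel bounds of the best region
    cx, cy = (x0 + x1) // 2, (y0 + y1) // 2
    z = float(depth[cy, cx])
    n = normals[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)
    n = n / (np.linalg.norm(n) + 1e-8)
    return (cx, cy, z), n
```

Averaging the normals over the whole region makes the approach direction robust to per-pixel normal-estimation noise.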

Description

Technical Field

[0001] The invention belongs to the field of autonomous grasping of robots and relates to a method for estimating the grasping pose of an unknown object based on a mixed information input network model.

Background Technique

[0002] In the field of intelligent robots, autonomous grasping of unknown objects is a key capability. Robotic grasping has been studied for decades and many achievements have been made, but current robots can take weeks of reprogramming to perform a complex new grasping task, making the retooling of modern manufacturing lines expensive and slow. Moreover, most robots operate in a specific environment, performing grasping operations on specific known objects. For unknown objects placed in different poses in an uncertain environment, current technology is not yet mature enough for a robot to independently determine the grasping position on the object and the grasping pose of the gripper. Tr...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T7/73
CPC: G06T2207/20081; G06T2207/20084; G06T7/75
Inventors: 王滨, 王志超, 刘宏, 赵京东, 王栋
Owner: HARBIN INST OF TECH