
Method for estimating unknown object grabbing positions and posture on basis of mixed information input network models

A mixed-information input network technology, applied in computing, image data processing, instruments, and similar fields, addressing problems such as low efficiency, the heavy computation required to search for grasping points, and the difficulty of grasping unknown objects.

Active Publication Date: 2018-07-13
HARBIN INST OF TECH
Cites: 3 · Cited by: 41

AI Technical Summary

Problems solved by technology

However, for large numbers of objects of different shapes, three-dimensional object information is difficult to obtain, the search for grasping points is computationally expensive, and efficiency is low, so autonomous robotic grasping of unknown objects is difficult to realize in practical applications.
[0003] At present, there is still a large gap between the intelligence of robots and that of humans, and it remains difficult to effectively identify the graspable region of an unknown object and to grasp unknown objects autonomously.



Examples


Specific Embodiment 1

[0070] Specific Embodiment 1: The specific implementation of the present invention is further elaborated below in conjunction with the accompanying drawings. Figure 1 shows a flow chart of the method of the present invention for estimating the grasping pose of an unknown object based on a convolutional neural network model, which is mainly completed by the following steps:

[0071] Step 1: Image Preprocessing

[0072] 1) Depth information preprocessing

[0073] The mixed-information input of the present invention comprises the color, depth, and normal-vector channel information of the object image; the data come from a Microsoft Kinect depth sensor. Depth-channel information usually contains considerable image noise caused by shadows, object reflections, and the like, so the depth values of many pixels in the depth image are missing, typically over large contiguous areas. Therefore, when traditional image filtering methods are used to try to ...
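The paragraph above is truncated, but it describes large contiguous holes in Kinect depth maps that defeat fixed-size filters. The following is a minimal sketch of one plausible remedy, assuming OpenCV and NumPy: hole filling by inpainting, plus gradient-based estimation of the normal-vector channel. The function names, the Navier-Stokes inpainting choice, and all parameters are illustrative assumptions, not the patent's actual procedure.

```python
import numpy as np
import cv2

def fill_depth_holes(depth, inpaint_radius=5):
    """Fill missing (zero) depth pixels, which on Kinect depth maps
    often occur over large contiguous regions."""
    mask = (depth == 0).astype(np.uint8)          # 1 where depth is unknown
    d_max = float(depth.max()) or 1.0             # avoid divide-by-zero
    depth_8u = (depth / d_max * 255.0).astype(np.uint8)
    # Navier-Stokes inpainting propagates surrounding depth into the holes,
    # which copes with large missing areas better than a fixed-size filter.
    filled = cv2.inpaint(depth_8u, mask, inpaint_radius, cv2.INPAINT_NS)
    return filled.astype(np.float32) / 255.0 * d_max

def depth_to_normals(depth):
    """Estimate a unit surface-normal map from depth-image gradients
    (one simple way to obtain the normal-vector channel)."""
    depth = depth.astype(np.float32)
    dzdx = cv2.Sobel(depth, cv2.CV_32F, 1, 0, ksize=5)
    dzdy = cv2.Sobel(depth, cv2.CV_32F, 0, 1, ksize=5)
    normals = np.dstack((-dzdx, -dzdy, np.ones_like(depth)))
    norm = np.linalg.norm(normals, axis=2, keepdims=True)
    return normals / np.maximum(norm, 1e-6)
```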

Embodiment

[0118] This embodiment is described with reference to Figures 1 to 5. The steps of the unknown-object grasping and recognition method based on the convolutional neural network model are as follows:

[0119] Step 1: first preprocess the RGB-D image. The left side of Figure 2 shows the color image of the original object, in which the rectangular frame is the rectangular region whose graspability needs to be judged; the direction of the rectangle's long axis is the closing direction of the robot's gripper. The first row on the right side of the figure shows the rectangular region image after the image rotation operation, and the second and third rows show the results of the color image and the normal-vector image after image scaling and whitening.
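As a hedged sketch of the rotation, cropping, scaling, and whitening operations just described, assuming OpenCV: the rectangle half-extents and output patch size below are illustrative assumptions, not the patent's parameters.

```python
import numpy as np
import cv2

def extract_grasp_patch(image, center, angle_deg,
                        half_extents=(40, 20), out_size=(24, 24)):
    """Rotate the image so the candidate rectangle's long axis (the
    gripper closing direction) is horizontal, then crop and rescale it."""
    h, w = image.shape[:2]
    M = cv2.getRotationMatrix2D(center, angle_deg, 1.0)
    rotated = cv2.warpAffine(image, M, (w, h))
    x, y = int(round(center[0])), int(round(center[1]))
    hw, hh = half_extents
    patch = rotated[max(y - hh, 0):y + hh, max(x - hw, 0):x + hw]
    return cv2.resize(patch, out_size)

def whiten(patch):
    """Per-patch whitening: zero mean and unit variance per channel."""
    patch = patch.astype(np.float32)
    mean = patch.mean(axis=(0, 1), keepdims=True)
    std = patch.std(axis=(0, 1), keepdims=True)
    return (patch - mean) / np.maximum(std, 1e-6)
```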

[0120] Step 2: construct the hybrid information fusion model whose structure is shown in Figure 3, and build the deep convolutional neural network model.
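The patent's Figure 3 architecture is not reproduced on this page, so the following is only a sketch of one plausible hybrid-information fusion model in PyTorch: separate convolutional branches for the color, depth, and normal-vector channels, fused by concatenation before fully connected layers. The class name, layer sizes, and the two-class (graspable / not graspable) head are all assumptions.

```python
import torch
import torch.nn as nn

class MixedInputGraspNet(nn.Module):
    """Hypothetical three-branch fusion network over RGB, depth, normals."""
    def __init__(self, patch_size=24):
        super().__init__()
        def branch(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 5, padding=2), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
        self.rgb = branch(3)      # color channels
        self.depth = branch(1)    # depth channel
        self.normal = branch(3)   # surface-normal channels
        feat = 64 * (patch_size // 4) ** 2
        self.head = nn.Sequential(
            nn.Linear(3 * feat, 256), nn.ReLU(),
            nn.Linear(256, 2),    # graspable vs. not graspable
        )

    def forward(self, rgb, depth, normal):
        # Fuse by concatenating the flattened branch features.
        f = torch.cat([
            self.rgb(rgb).flatten(1),
            self.depth(depth).flatten(1),
            self.normal(normal).flatten(1),
        ], dim=1)
        return self.head(f)
```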

[0121] Step 3: Input the training data into the ...
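Step 3 is truncated here, but a typical training step for such a model would feed preprocessed patches and graspability labels through the fusion network and optimize a classification loss. This sketch assumes the hypothetical MixedInputGraspNet above; the optimizer and hyperparameters are illustrative.

```python
import torch

model = MixedInputGraspNet()  # hypothetical fusion model sketched above
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = torch.nn.CrossEntropyLoss()

def train_step(rgb, depth, normal, labels):
    """One optimization step on a mini-batch of preprocessed patches."""
    optimizer.zero_grad()
    logits = model(rgb, depth, normal)
    loss = loss_fn(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random tensors standing in for real training patches.
rgb = torch.randn(8, 3, 24, 24)
depth = torch.randn(8, 1, 24, 24)
normal = torch.randn(8, 3, 24, 24)
labels = torch.randint(0, 2, (8,))
print(train_step(rgb, depth, normal, labels))
```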



Abstract

The invention discloses a method for estimating unknown-object grabbing positions and posture on the basis of mixed-information input network models, and belongs to the field of autonomous grabbing by robots. By the aid of the method, unknown objects can be quickly and effectively grabbed by robots. The method includes: preprocessing image mixed information in training image data sets; constructing information fusion mechanisms on the basis of mixed-information input and building neural network models; training network model parameters with the mixed-information fusion mechanisms to obtain optimized mixed-information input network models; segmenting grabbable objects from scene images, acquired by sensors, with object segmentation technologies on the basis of RGB-D (red, green, blue and depth) images; searching for the optimal grabbing regions on the objects with candidate-region generation mechanisms on the basis of feedback information; and estimating the grabbing positions and grabbing posture of the robots in the optimal grabbing regions with depth information, thereby acquiring the grabbing positions and posture for grabbing the objects. The method has the advantage that robots can quickly and accurately grab unknown objects autonomously.
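The abstract mentions a candidate-region generation mechanism driven by feedback information but does not detail it on this page. As a hedged illustration only, one common way to realize feedback-driven search is to sample oriented grasp candidates, score them with the trained network, and resample around the best candidate with a shrinking radius; the function, its parameters, and the sampling scheme below are assumptions, not the patent's mechanism.

```python
import numpy as np

def search_best_grasp(score_fn, center0, n_rounds=3, n_samples=50):
    """Iteratively sample candidate (x, y, angle) grasps around the best
    one found so far, shrinking the search region each round (the
    'feedback' step)."""
    best, best_score = (center0[0], center0[1], 0.0), -np.inf
    radius, angle_spread = 40.0, 90.0
    rng = np.random.default_rng(0)
    for _ in range(n_rounds):
        for _ in range(n_samples):
            x = best[0] + rng.uniform(-radius, radius)
            y = best[1] + rng.uniform(-radius, radius)
            a = best[2] + rng.uniform(-angle_spread, angle_spread)
            s = score_fn(x, y, a)   # network graspability score
            if s > best_score:
                best, best_score = (x, y, a), s
        radius *= 0.5               # focus near the best candidate so far
        angle_spread *= 0.5
    return best, best_score
```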

Description

Technical field

[0001] The invention belongs to the field of autonomous robotic grasping, and relates to a method for estimating the grasping pose of an unknown object based on a mixed-information input network model.

Background technique

[0002] In the field of intelligent robots, autonomous grasping of unknown objects is a key capability. Robotic grasping has been studied for decades and many achievements have been made, but reprogramming a current robot to perform a complex new grasping task can take weeks, which makes retooling modern manufacturing lines expensive and slow. Moreover, most robots are used in a specific environment to perform grasping operations on specific known objects. For unknown objects placed in different poses in an uncertain environment, current technology is not yet mature enough for a robot to independently determine the grasping position on the object and the grasping pose of the gripper. Tr...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/73
CPC: G06T2207/20081, G06T2207/20084, G06T7/75
Inventor: 王滨, 王志超, 刘宏, 赵京东, 王栋
Owner: HARBIN INST OF TECH