Object six-degree-of-freedom pose estimation method based on color and depth information fusion

A pose estimation technology in the field of robot vision that addresses the problems of reduced algorithm efficiency, sharply increasing template-matching time, and high time cost, achieving accurate pose estimation, good representational ability, and improved robustness.

Active Publication Date: 2020-05-19
TONGJI UNIV

AI Technical Summary

Problems solved by technology

[0004] The first is template matching: the image captured online is matched against templates prepared offline, and the object pose is determined from the matched template. Existing methods of this type have the following problems: (1) when objects are mixed and stacked, it is difficult to match the target object accurately, so robustness is low; (2) matching time grows sharply with the number of templates, making real-time operation difficult.
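As background, the brute-force matching criticized above can be sketched as follows. This is a toy, illustrative numpy implementation (the function name and array sizes are ours); practical systems use image pyramids, gradient-based templates (e.g. LINEMOD-style), or GPU search, which is precisely why matching cost explodes with the template count:

```python
import numpy as np

def match_template_ncc(image, template):
    """Exhaustively slide `template` over `image`; return the (row, col) with
    the highest zero-mean normalized cross-correlation score."""
    H, W = image.shape
    h, w = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t * t).sum())
    best_score, best_pos = -np.inf, (0, 0)
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            p = image[i:i + h, j:j + w]
            p = p - p.mean()
            denom = np.sqrt((p * p).sum()) * t_norm
            score = (p * t).sum() / denom if denom > 0 else -1.0
            if score > best_score:
                best_score, best_pos = score, (i, j)
    return best_pos
```

The double loop makes the cost O(H·W·h·w) per template, so total runtime scales linearly with the number of templates — the scaling problem the paragraph describes.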
[0005] The second is key-point correspondence: two-dimensional feature points are first predicted in the color image, and the PnP algorithm then solves the object's six-degree-of-freedom pose from the correspondences between those 2D feature points and points on the object model. Existing methods of this type have the following problems: traditional key-point detectors struggle to find 2D feature points on texture-less or weakly textured objects, and deep-learning-based detectors are susceptible to mutual occlusion between objects, so robustness suffers in that case.
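The PnP step mentioned above can be illustrated with a minimal Direct Linear Transform solver in numpy. This is only a sketch, assuming noise-free, non-coplanar correspondences already converted to normalized camera coordinates (pixel coordinates multiplied by K⁻¹); production code would use a robust solver such as OpenCV's `solvePnP`, typically with RANSAC:

```python
import numpy as np

def dlt_pnp(pts3d, pts2d_norm):
    """Recover (R, t) from n >= 6 non-coplanar 2D-3D correspondences via the
    Direct Linear Transform. Each row pair encodes u = (P0.Xh)/(P2.Xh) and
    v = (P1.Xh)/(P2.Xh) as linear constraints on the projection matrix P."""
    n = len(pts3d)
    A = np.zeros((2 * n, 12))
    for i, (X, x) in enumerate(zip(pts3d, pts2d_norm)):
        Xh = np.append(X, 1.0)
        u, v = x
        A[2 * i, 0:4] = Xh
        A[2 * i, 8:12] = -u * Xh
        A[2 * i + 1, 4:8] = Xh
        A[2 * i + 1, 8:12] = -v * Xh
    P = np.linalg.svd(A)[2][-1].reshape(3, 4)   # null vector of A
    # fix the overall sign so points lie in front of the camera (positive depth)
    if (P @ np.append(pts3d[0], 1.0))[2] < 0:
        P = -P
    M, t = P[:, :3], P[:, 3]
    U, S, Vt = np.linalg.svd(M)                 # M = s * R for some scale s > 0
    R = U @ Vt
    return R, t / S.mean()
```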
[0006] The third is direct regression: the color image is fed to a deep network that directly regresses the six-degree-of-freedom pose of the object. Existing methods of this type have the following problem: when the target object sits in a cluttered background and objects are stacked on one another, the information used to estimate the pose inevitably mixes in the background and the other stacked objects in addition to the object itself, which strongly disturbs feature extraction and degrades the pose estimate. Such methods therefore apply pose refinement to the preliminary result to correct the predicted pose, but the refinement process is time-consuming, which reduces the efficiency of the algorithm.




Embodiment Construction

[0034] The present invention is described in detail below with reference to the accompanying drawings and specific embodiments. This embodiment is implemented on the basis of the technical solution of the present invention and gives a detailed implementation and a specific operation process, but the protection scope of the present invention is not limited to the following embodiments.

[0035] This embodiment provides a six-degree-of-freedom pose estimation method for objects based on the fusion of color and depth information. A schematic diagram of the method is shown in Figure 1; the method specifically includes the following steps:

[0036] S1. Obtain the color image and depth image of the target object, input the color image to the trained instance segmentation network, and obtain the instance segmentation result;

[0037] S2. Cut out the color image block containing the target object from the color image according to the instance segmentation result, and obtain ...
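Assuming a standard pinhole camera model, the point-cloud acquisition in S2 can be sketched as back-projecting the depth pixels selected by the instance mask. The function name and the intrinsics `fx, fy, cx, cy` are illustrative, not from the patent:

```python
import numpy as np

def backproject_masked_depth(depth, mask, fx, fy, cx, cy):
    """Convert the masked region of a depth map to a 3D point cloud in the
    camera frame, using the pinhole model X = (u-cx)Z/fx, Y = (v-cy)Z/fy."""
    v, u = np.nonzero(mask & (depth > 0))   # pixel rows (v) and columns (u)
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)      # (N, 3) target-object point cloud
```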



Abstract

The invention relates to an object six-degree-of-freedom pose estimation method based on color and depth information fusion. The method comprises the following steps: acquiring a color image and a depth image of a target object, and carrying out instance segmentation on the color image; cutting a color image block containing the target object from the color image, and acquiring a target object point cloud from the depth image; extracting color features from the color image block, and combining the color features with the target object point cloud at the pixel level; carrying out point cloud processing on the target object point cloud to obtain a global feature and a plurality of point cloud local region features fusing the color information and the depth information, and combining the global feature into the point cloud local region features; and predicting a pose and a confidence of the target object from each local feature, and taking the pose corresponding to the highest confidence as the final estimation result. Compared with the prior art, the method combines color information with depth information and predicts the object pose from both local and global features, and has the advantages of high robustness, high accuracy, and the like.
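The fusion-and-selection data flow described in the abstract can be caricatured in a few lines of numpy. The feature dimensions are invented and the learned networks are replaced by random stand-ins, so this only shows the structure: pixel-level concatenation of color and geometry features, order-invariant global pooling, per-point pose hypotheses, and selection by highest confidence:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pts, d_rgb, d_geo = 500, 32, 64   # hypothetical point count and feature sizes

# stand-ins for learned embeddings: a color feature sampled from the image block
# at each point's pixel, and a geometry feature computed from the point cloud
color_feat = rng.normal(size=(n_pts, d_rgb))
geo_feat = rng.normal(size=(n_pts, d_geo))

local_feat = np.concatenate([color_feat, geo_feat], axis=1)  # pixel-level fusion
global_feat = local_feat.max(axis=0)                         # order-invariant pooling
fused = np.concatenate(
    [local_feat, np.tile(global_feat, (n_pts, 1))], axis=1)  # append global to each local

# each fused feature would feed a small head predicting a pose hypothesis and a
# confidence; here the heads' outputs are faked with random numbers
poses = rng.normal(size=(n_pts, 7))          # e.g. quaternion (4) + translation (3)
conf = rng.uniform(size=n_pts)
best_pose = poses[int(np.argmax(conf))]      # keep the most confident hypothesis
```

The max-pooling step is what makes the global feature independent of point order, and the per-point hypotheses are what let a single confident, unoccluded region dominate the final estimate.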

Description

Technical Field

[0001] The invention relates to the field of robot vision, and in particular to a six-degree-of-freedom pose estimation method for an object based on the fusion of color and depth information.

Background

[0002] Computer-vision-based estimation of an object's six-degree-of-freedom pose (the three-dimensional translation and rotation of the object relative to the camera coordinate system, six degrees of freedom in total) enables a robot to perceive its surroundings at the three-dimensional level. It is a key technology for robotic grasping and dexterous manipulation, and is of great significance for promoting the application of service robots and industrial robots. The technology also has broad application prospects in autonomous driving, augmented reality, and virtual reality. [0003] Existing object pose estimation techniques mainly include the follo...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/33; G06T7/11; G06K9/62
CPC: G06T7/33; G06T7/11; G06T2207/10024; G06T2207/10028; G06T2207/20221; G06T2207/20081; G06T2207/20084; G06F18/253; Y02P90/30
Inventors: 陈启军, 周光亮, 王德明, 汪晏, 刘成菊
Owner: TONGJI UNIV