Object pose estimation method and system based on deformation convolution network

A convolutional-network-based pose estimation technology in the field of computer vision. It addresses problems such as reduced accuracy, reduced algorithm efficiency, and long processing time, and achieves the effects of improving efficiency, simplifying the estimation steps, and improving accuracy and robustness.

Active Publication Date: 2020-03-24
TONGJI UNIV


Problems solved by technology

[0005] Existing methods of this kind have the following problems: it is difficult to match the target object accurately when the background is cluttered and objects are mixed and stacked, so robustness is low; and the template-matching time grows sharply as the number of templates increases, making it difficult to meet real-time requirements.
[0007] Existing methods of this kind have the following problems: every convolution kernel in the network is a standard kernel, so when the object whose pose is being estimated sits in a cluttered background with objects stacked on one another, the information used to estimate the pose inevitably includes the background and the other stacked objects in addition to the object itself. This strongly disturbs feature extraction and thereby reduces the accuracy of object pose estimation. Such methods also rely on pose refinement to correct the predicted pose, but the refinement process takes a long time, which reduces the efficiency of the algorithm.



Examples


Embodiment 1

[0049] An object pose estimation method based on a deformable convolutional network, as shown in Figure 1, includes:

[0050] S1. Obtain the color image and depth image of the target object, input the color image of the target object into the trained instance segmentation network, and obtain the instance segmentation result;

[0051] S2. According to the instance segmentation result, cut out the color image block and the depth image block containing the target object from the color image and the depth image respectively, and convert the depth image block into a point cloud represented as a three-channel image;
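The depth-to-point-cloud conversion in S2 is typically a back-projection of each pixel through the camera intrinsics, producing an H×W×3 image whose channels are the X, Y, Z coordinates. A minimal NumPy sketch (the intrinsics `fx`, `fy`, `cx`, `cy` below are illustrative values, not the patent's):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into an HxWx3 point-cloud
    image (X, Y, Z per pixel) using the pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # three channels: X, Y, Z

# Illustrative intrinsics; real values come from camera calibration.
depth = np.full((4, 4), 0.5)
pc = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(pc.shape)   # (4, 4, 3)
```

Representing the cloud as a three-channel image (rather than an unordered point list) keeps it pixel-aligned with the color block, so both can be fed to the same convolutional network.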

[0052] S3. Set to 0 the pixel values of the regions of the color image block and the point cloud image that do not contain the target object, then input them into the trained deformation convolution network to obtain the target object pose estimation result. The output of the deformation convolution network includes multiple target object pose values with corresponding confidences; the target object pose ...
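The S3 preprocessing (zeroing every pixel outside the instance mask) and one common way of reducing multiple pose hypotheses to a final answer (picking the highest-confidence one) can be sketched as follows; the selection rule is an assumption, since the paragraph above is truncated before describing it:

```python
import numpy as np

def mask_inputs(color_block, cloud_block, instance_mask):
    """Zero every pixel that does not belong to the target object,
    so the network sees only object pixels (S3 preprocessing)."""
    m = instance_mask.astype(bool)[..., None]   # broadcast over channels
    return np.where(m, color_block, 0), np.where(m, cloud_block, 0.0)

def select_pose(poses, confidences):
    """Pick the pose hypothesis with the highest confidence
    (an assumed reduction rule for the network's multiple outputs)."""
    return poses[int(np.argmax(confidences))]

mask = np.zeros((4, 4), dtype=np.uint8); mask[1:3, 1:3] = 1
color = np.full((4, 4, 3), 255, dtype=np.uint8)
cloud = np.ones((4, 4, 3))
mc, mp = mask_inputs(color, cloud, mask)
poses = np.stack([np.eye(4), 2 * np.eye(4)])
best = select_pose(poses, np.array([0.3, 0.9]))
print(mc[0, 0].tolist(), best[0, 0])   # [0, 0, 0] 2.0
```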

Embodiment 2

[0065] An object pose estimation system based on a deformable convolutional network, corresponding to Embodiment 1 and shown in Figure 2, includes an RGB-D camera, an instance segmentation module, an object cropping module, a conversion processing module and a deformable convolution module;

[0066] The RGB-D camera acquires the color image and depth image of the target object;

[0067] The instance segmentation module segments the color image to obtain the instance segmentation result;

[0068] The object cropping module cuts out the color image block and the depth image block containing the target object from the color image and the depth image respectively, according to the instance segmentation result;

[0069] The conversion processing module converts the depth image block into a point cloud represented by a three-channel image, and sets the pixel value of a region not containing the target object in the color image block and the point cloud image to 0;

[0070] The de...
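The module flow of Embodiment 2 can be sketched as a small pipeline object. Every stage below is a hypothetical stand-in supplied as a callable, since the patent's trained networks are not available here; only the wiring (segment → crop → convert/mask → deformable-conv pose network) comes from the embodiment:

```python
class PoseEstimationPipeline:
    """Wires the Embodiment-2 modules together. Each stage is an
    injected callable (hypothetical stand-ins for the patent's
    instance segmentation, cropping, conversion and pose modules)."""
    def __init__(self, segment, crop, convert_and_mask, pose_net):
        self.segment = segment
        self.crop = crop
        self.convert_and_mask = convert_and_mask
        self.pose_net = pose_net

    def __call__(self, color, depth):
        seg = self.segment(color)                        # instance segmentation
        color_blk, depth_blk = self.crop(color, depth, seg)
        color_blk, cloud_blk = self.convert_and_mask(color_blk, depth_blk, seg)
        return self.pose_net(color_blk, cloud_blk)       # pose + confidence

# Toy stand-ins that only demonstrate the data flow.
pipe = PoseEstimationPipeline(
    segment=lambda c: "mask",
    crop=lambda c, d, s: ("color_blk", "depth_blk"),
    convert_and_mask=lambda cb, db, s: (cb, "cloud_blk"),
    pose_net=lambda cb, pb: {"pose": "T", "conf": 0.9},
)
print(pipe("color", "depth"))   # {'pose': 'T', 'conf': 0.9}
```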



Abstract

The invention relates to an object pose estimation method based on a deformation convolution network. The method comprises the steps: S1, obtaining a color image and a depth image of a target object, inputting the color image of the target object into a trained instance segmentation network, and obtaining an instance segmentation result; S2, respectively cutting out a color image block and a depth image block containing the target object from the color image and the depth image according to the instance segmentation result, and converting the depth image block into a point cloud represented by a three-channel image; S3, setting the pixel values of the areas that do not contain the target object in the color image block and the point cloud image to 0, and then inputting them into the trained deformation convolution network to obtain a target object pose estimation result. The receptive field of the features extracted by the deformation convolution network is concentrated in the regions of the color image block and the point cloud image where the target object is distributed. Compared with the prior art, the method has the advantages of high precision, high efficiency and the like.

Description

Technical field

[0001] The present invention relates to the field of computer vision, and in particular to a method and system for estimating the pose of an object based on a deformable convolutional network.

Background technique

[0002] Computer-vision-based estimation of an object's six-degree-of-freedom pose (the three translation and three rotation parameters of the object relative to the camera coordinate system, six degrees of freedom in total) enables a robot to perceive its surroundings at the three-dimensional level. It is a key technology for realizing robot grasping and dexterous manipulation, and is of great significance for promoting the application of service robots and industrial robots. The technology also has broad application prospects in augmented reality, virtual reality and other fields.

[0003] Existing object pose estimation techniques mainly include the following types: [00...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/73; G06N3/04; G06T7/11
CPC: G06T7/73; G06T7/11; G06T2207/10012; G06N3/045
Inventors: 陈启军, 周光亮, 王德明, 汪晏, 颜熠, 刘成菊
Owner TONGJI UNIV