
Semi-supervised object pose estimation method combining generated data and unlabeled data

A robot-vision technology for pose estimation and data labeling; it addresses the problems of insufficient accuracy and unresolved domain differences, and achieves improved flexibility and accuracy.

Active Publication Date: 2021-07-16
TONGJI UNIV

AI Technical Summary

Problems solved by technology

Although pose labeling is not required, this method still needs the generated color images for supervision; the influence of domain differences is therefore not eliminated, and the method's accuracy cannot meet the needs of practical applications.




Embodiment Construction

[0032] The present invention is described in detail below in conjunction with the accompanying drawings and specific embodiments. This embodiment is carried out on the premise of the technical solution of the present invention and gives a detailed implementation and specific operation process, but the protection scope of the present invention is not limited to the following embodiments.

[0033] This embodiment provides a semi-supervised object pose estimation method based on generated data and unlabeled real data. A schematic diagram of the method is shown in Figure 1. The method specifically includes the following steps:

[0034] S1. Use the CAD model of the object to generate object point cloud data with pose labels;
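The patent text shown here does not specify how the labeled point clouds are generated from the CAD model. A minimal NumPy sketch of one common approach (an assumption, not taken from the patent): sample points on the model surface in the canonical frame, then apply a random rigid transform whose rotation and translation become the pose label.

```python
import numpy as np

def random_rotation(rng):
    """Draw a uniformly random 3x3 rotation matrix via QR decomposition."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q *= np.sign(np.diag(r))          # make the QR factorization unique
    if np.linalg.det(q) < 0:          # ensure a proper rotation (det = +1)
        q[:, 0] *= -1
    return q

def make_labeled_sample(model_points, rng, t_range=0.5):
    """Apply a random pose to canonical CAD points; the pose is the label."""
    R = random_rotation(rng)
    t = rng.uniform(-t_range, t_range, size=3)
    cloud = model_points @ R.T + t    # posed point cloud in the camera frame
    return cloud, (R, t)

rng = np.random.default_rng(0)
# Stand-in for points sampled from the CAD model surface (hypothetical data).
model = rng.uniform(-0.05, 0.05, size=(1024, 3))
cloud, (R, t) = make_labeled_sample(model, rng)
```

Because the pose is applied synthetically, the label is exact by construction, which is the appeal of generated data over manually annotated real images.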

[0035] S2. Obtain an unlabeled color image and depth image of the target object, input the color image into the trained instance segmentation network to obtain the instance segmentation result, and obtain the point cloud of the target object from t...
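Step S2 turns a segmentation mask plus a depth image into an object point cloud. A minimal sketch of the standard pinhole-camera back-projection, assuming known intrinsics (`fx`, `fy`, `cx`, `cy`) and a boolean object mask from the segmentation network:

```python
import numpy as np

def depth_to_cloud(depth, mask, fx, fy, cx, cy):
    """Back-project masked depth pixels into a 3-D point cloud (camera frame)."""
    v, u = np.nonzero(mask & (depth > 0))   # pixel rows/cols on the object
    z = depth[v, u]                         # depth values at those pixels
    x = (u - cx) * z / fx                   # pinhole model: x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)      # (N, 3) object point cloud
```

Invalid (zero) depth readings are dropped; the resulting cloud is the "unlabeled real data" the method trains on.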



Abstract

The invention relates to a semi-supervised object pose estimation method combining generated data and unlabeled data. The method comprises the following steps: 1) generate point cloud data with pose labels, i.e. the generated data; 2) obtain a color image and a depth image of a target object without labels, input the color image into a trained instance segmentation network to obtain an instance segmentation result, and obtain a point cloud of the target object from the depth image according to the segmentation result, i.e. the unlabeled real data; 3) in each training period, perform supervised training of the pose estimation network model using the generated data, and self-supervised training using the unlabeled real data; 4) after each training period ends, calculate the accuracy of the pose estimation network model using a portion of the real data. Compared with the prior art, the method mainly solves the problem that 6D pose labels are difficult to obtain, and achieves accurate object pose estimation using only synthetic data and unlabeled real data.
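The abstract's training scheme alternates a supervised loss on the labeled generated data with a self-supervised loss on the unlabeled real clouds, and evaluates accuracy each period. The patent text shown here does not name the losses or the metric; common choices in 6D pose work (assumptions, not taken from the patent) are a point-matching loss for the supervised branch, a chamfer distance between the posed CAD model and the observed cloud for the self-supervised branch, and the ADD metric for accuracy. NumPy sketches of all three:

```python
import numpy as np

def transform(points, R, t):
    """Apply a rigid transform (R, t) to an (N, 3) point set."""
    return points @ R.T + t

def point_matching_loss(model_pts, R_pred, t_pred, R_gt, t_gt):
    """Supervised loss on generated data: mean distance between model points
    under the predicted pose and under the ground-truth pose."""
    d = transform(model_pts, R_pred, t_pred) - transform(model_pts, R_gt, t_gt)
    return np.linalg.norm(d, axis=1).mean()

def chamfer(a, b):
    """Symmetric chamfer distance; usable as a self-supervised loss between
    the posed CAD model and the observed (unlabeled) real point cloud."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def add_metric(model_pts, R_pred, t_pred, R_gt, t_gt, diameter, thresh=0.1):
    """ADD accuracy check: a pose counts as correct if the mean point distance
    is below a fraction (typically 10%) of the object diameter."""
    return point_matching_loss(model_pts, R_pred, t_pred, R_gt, t_gt) < thresh * diameter
```

The chamfer term needs no pose label, which is what makes the real-data branch self-supervised; the ADD check on held-out real data would correspond to step 4.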

Description

technical field

[0001] The invention relates to the field of robot vision, in particular to a semi-supervised object pose estimation method combining generated data and unlabeled data.

Background technique

[0002] Object pose estimation based on computer vision is a key technology for robotic grasping and dexterous manipulation. It improves a robot's adaptability to environments and tasks, broadens the robot's fields of application, and is of great significance for the flexibility and performance of robots in scenarios such as intelligent manufacturing, warehousing and logistics, and home services. The technology also has broad application prospects in autonomous driving, augmented reality, and virtual reality.

[0003] In recent years, with the vigorous development of deep learning, object pose estimation based on deep learning has achieved good results. In unstructured sce...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/73, G06T7/11, G06N3/08
CPC: G06T7/74, G06T7/11, G06N3/08, G06T2207/10028, G06T2207/20081, Y02P90/30
Inventor: 陈启军, 周光亮, 颜熠, 王德明, 刘成菊
Owner: TONGJI UNIV