Robot scene self-adaptive pose estimation method based on RGB-D camera

A technology for pose estimation in robotics, applied in instrumentation, computing, image data processing, etc. It addresses problems such as algorithm failure in scenes with sparse features and the resulting inability to obtain a pose estimate.

Pending Publication Date: 2019-09-10
Applicant: HUNAN UNIV +1
Cites: 3 · Cited by: 26

AI Technical Summary

Problems solved by technology

However, the defect of this class of methods is that it relies heavily on the selection of feature points. First, mismatched point pairs in the feature point set seriously corrupt the initial value of the 3D estimation; second, the algorithm is effective only for scenes rich in image feature points. If the scene's feature points are sparse, the algorithm fails and no pose estimate can be obtained.
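The sparsity failure mode can be made concrete with a short check. The sketch below is not part of the patent: it uses OpenCV's ORB detector, and the MIN_FEATURES threshold is an illustrative assumption, not the patent's criterion.

```python
# Minimal sketch (not the patent's algorithm) of the feature-sparsity check
# implied above, using OpenCV's ORB detector.
import cv2
import numpy as np

MIN_FEATURES = 100  # hypothetical threshold for "enough" feature points

def features_sufficient(gray_image: np.ndarray) -> bool:
    """Return True if the frame has enough ORB keypoints for feature-based
    pose estimation; low-texture scenes (e.g. blank walls) fail this check."""
    orb = cv2.ORB_create(nfeatures=500)
    keypoints = orb.detect(gray_image, None)
    return len(keypoints) >= MIN_FEATURES

# Example: a uniform gray image has no corners, so the check fails.
blank = np.full((480, 640), 128, dtype=np.uint8)
print(features_sufficient(blank))  # -> False
```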




Example Embodiment

[0111] The present invention will be further described below with reference to the drawings and embodiments.

[0112] The RGB-D camera simultaneously acquires a two-dimensional color image I and a three-dimensional point cloud D of the scene. The color image I_t(u,v) and the point cloud D_t(u,v) correspond one-to-one by pixel: the pixel I_t(u,v) in row u, column v of the color image corresponds to the three-dimensional point D_t(u,v) = (x, y, z), which carries the depth information of that pixel. The point cloud D is therefore the set of three-dimensional points corresponding to all pixels of the two-dimensional color image.
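The correspondence described in [0112] amounts to an "organized" point cloud, aligned element-for-element with the color image. The minimal sketch below illustrates the indexing; the array names, shapes, and values are assumptions chosen purely for illustration.

```python
# Illustrative sketch of the pixel-to-point correspondence in [0112]:
# the point cloud D is stored as an H x W x 3 array aligned with the
# H x W color image I, so pixel (u, v) maps directly to its 3D point.
import numpy as np

H, W = 480, 640
I = np.zeros((H, W, 3), dtype=np.uint8)    # color image I_t, one RGB triple per pixel
D = np.zeros((H, W, 3), dtype=np.float32)  # point cloud D_t, one (x, y, z) per pixel

u, v = 120, 300          # row u, column v
color = I[u, v]          # pixel I_t(u, v)
x, y, z = D[u, v]        # corresponding 3D point D_t(u, v) = (x, y, z)
```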

[0113] Figure 1 shows a flowchart of the present invention. A robot scene adaptive pose estimation method...



Abstract

The invention discloses a robot scene self-adaptive pose estimation method based on an RGB-D camera. Two-dimensional color images of adjacent frames and the spatial depth information corresponding to their pixels are acquired with an RGB-D camera. When the color image contains sufficient feature points, ORB operators extract the features, the matching strategy provided by the invention performs accurate matching, and the three-dimensional pose is solved by a pose estimation algorithm based on the matched feature points; when the feature points are insufficient, the improved ICP algorithm provided by the invention solves the three-dimensional pose. A complete switching criterion is designed to fuse the two pose estimation methods. Finally, the pose estimates obtained by both methods are optimized by a bundle adjustment algorithm, yielding smooth and accurate three-dimensional pose estimation. The algorithm has the outstanding advantages of high robustness, high precision, low computational cost, and adaptability to different scenes.
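As a rough illustration of the adaptive switching summarized above, the self-contained sketch below chooses between the two branches. The thresholds, the FramePair fields, and the select_estimator function are hypothetical stand-ins, not the patent's actual switching criterion.

```python
# Self-contained sketch of the switching idea: feature-based estimation when
# texture is rich, an ICP-style fallback when it is sparse. All thresholds
# and names here are illustrative assumptions.
from dataclasses import dataclass

MIN_KEYPOINTS = 100     # hypothetical: below this, the scene counts as feature-sparse
MIN_GOOD_MATCHES = 30   # hypothetical: below this, feature matching is unreliable

@dataclass
class FramePair:
    num_keypoints: int      # ORB keypoints detected in the current frame
    num_good_matches: int   # matches surviving the accurate-matching step

def select_estimator(pair: FramePair) -> str:
    """Choose the pose-estimation branch for a pair of adjacent frames."""
    if pair.num_keypoints >= MIN_KEYPOINTS and pair.num_good_matches >= MIN_GOOD_MATCHES:
        return "orb"   # feature-based pose estimation from matched points
    return "icp"       # improved-ICP branch for feature-sparse scenes

# Example: a textureless scene yields few keypoints, so the ICP branch is chosen.
assert select_estimator(FramePair(num_keypoints=12, num_good_matches=3)) == "icp"
```

In either branch, the abstract states that the resulting pose is refined by bundle adjustment to produce a smooth trajectory.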

Description

Technical field

[0001] The invention belongs to the field of robot control, and in particular relates to a robot scene adaptive pose estimation method based on an RGB-D camera.

Background technique

[0002] Real-time, robust, high-precision 3D pose estimation is one of the research difficulties and hotspots in the field of robotics. Its goal is to estimate, in real time, the change in the robot's 3D spatial pose between two adjacent moments. It is core content of robot SLAM (simultaneous localization and mapping), motion tracking, AR (augmented reality), and related applications. Traditional navigation systems based on inertial sensors are widely used for pose estimation, but they suffer from drift and error accumulation, so the accuracy and reliability of the resulting pose estimates are low. Compared with inertial navigation, vision-based pose estimation does not suffer from physical drift, and cumulative error can be effectively eliminated through the global visua...


Application Information

IPC(8): G06T7/73
CPC: G06T7/73; G06T2207/10028; G06T2207/10024; Y02T10/40
Inventor: 余洪山, 付强, 林鹏, 孙炜, 杨振耕, 赖立海, 陈昱名, 吴思良
Owner: HUNAN UNIV