Visual identification and positioning method based on RGB-D camera

A visual recognition and positioning technology, applied in image analysis, image data processing, instruments, etc. It addresses the problems of erroneous convex-hull contours, inclusion of background regions, and slow running speed, and achieves low cost, high computational efficiency, and a small amount of computation.

Active Publication Date: 2017-04-19
SOUTH CHINA UNIV OF TECH

AI Technical Summary

Problems solved by technology

[0002] At present, most existing object recognition and positioning systems based on multi-camera color image sensors obtain the spatial position of each pixel by stereo matching of the images collected by the different sensors, which suffers from high cost, slow running speed, and high system complexity.
[0003] Object edge segmentation is mostly realized by convex-hull extraction on the color-camera image. This approach must take the apparent color of the object into account, is prone to misjudgment when the background color is similar to that of the object, and the extracted convex hull often has contour errors and includes parts of the background.

Method used



Examples


Embodiment

[0048] This embodiment provides a visual recognition and positioning method based on an RGB-D camera. As shown in Figure 1, the method mainly consists of three-dimensional point cloud acquisition, plane extraction, object segmentation, object feature extraction and matching, and object positioning. It specifically includes the following steps:

[0049] Step 1: collect the color image and depth image of the object with the Microsoft Kinect camera sensor and convert them into a three-dimensional point cloud image;

[0050] In this step, Microsoft's Kinect sensor collects the RGB-D images; the 3D point cloud image can then be obtained through the built-in API functions or third-party libraries such as Open Natural Interaction (OpenNI) and the Point Cloud Library (PCL).
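As a concrete illustration, the following C++ sketch shows one way to turn a registered depth image into an organized PCL point cloud via the pinhole camera model. It is a minimal example, not the embodiment's own code; the intrinsics fx, fy, cx, cy and the function name depthToCloud are hypothetical values chosen for illustration.

```cpp
// Minimal sketch: back-project a registered depth image into an organized
// pcl::PointCloud. Intrinsics are assumed Kinect-like values, not calibrated.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <cstdint>
#include <limits>
#include <vector>

pcl::PointCloud<pcl::PointXYZ>::Ptr depthToCloud(
    const std::vector<uint16_t>& depth,   // depth in millimetres, row-major
    int width, int height)
{
  // Hypothetical intrinsics; real values come from the sensor calibration.
  const float fx = 365.0f, fy = 365.0f;
  const float cx = width / 2.0f, cy = height / 2.0f;

  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  cloud->width = width;
  cloud->height = height;          // organized cloud, same layout as the image
  cloud->is_dense = false;
  cloud->points.resize(static_cast<size_t>(width) * height);

  for (int v = 0; v < height; ++v)
    for (int u = 0; u < width; ++u) {
      const float z = depth[v * width + u] * 0.001f;   // mm -> m
      pcl::PointXYZ& p = cloud->points[v * width + u];
      if (z <= 0.0f) {                                  // invalid measurement
        p.x = p.y = p.z = std::numeric_limits<float>::quiet_NaN();
        continue;
      }
      p.x = (u - cx) * z / fx;                          // back-project pixel
      p.y = (v - cy) * z / fy;
      p.z = z;
    }
  return cloud;
}
```

In practice the same organized cloud can also be obtained directly from OpenNI or the PCL grabber interfaces mentioned above; the manual back-projection is shown only to make the geometry explicit.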

[0051] Step 2: calculate the corresponding normal vector for each point of the 3D point cloud image obtained in Step 1;

[0052] As shown in Figure 2, the normal is obtained from the cross product...
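The paragraph above is truncated, but the cross-product idea can be sketched as follows: on an organized cloud, the normal at a point is taken as the normalized cross product of the difference vectors to its right and lower neighbours. This is an illustrative assumption about the computation, not the patent's exact implementation.

```cpp
// Minimal sketch (assumed formulation): per-point normals from the cross
// product of vectors to the right and lower neighbours of an organized cloud.
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <Eigen/Core>

pcl::PointCloud<pcl::Normal>::Ptr crossProductNormals(
    const pcl::PointCloud<pcl::PointXYZ>::ConstPtr& cloud)
{
  pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
  normals->width = cloud->width;
  normals->height = cloud->height;
  normals->points.resize(cloud->size());

  for (unsigned v = 0; v + 1 < cloud->height; ++v)
    for (unsigned u = 0; u + 1 < cloud->width; ++u) {
      const Eigen::Vector3f p  = cloud->at(u, v).getVector3fMap();
      const Eigen::Vector3f pr = cloud->at(u + 1, v).getVector3fMap();
      const Eigen::Vector3f pd = cloud->at(u, v + 1).getVector3fMap();

      // Normal = cross product of the two in-surface difference vectors.
      // Points with invalid (NaN) neighbours simply yield NaN normals.
      Eigen::Vector3f n = (pr - p).cross(pd - p);
      if (n.norm() > 1e-8f) n.normalize();

      pcl::Normal& out = normals->at(u, v);
      out.normal_x = n.x();
      out.normal_y = n.y();
      out.normal_z = n.z();
    }
  return normals;
}
```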



Abstract

The invention provides a visual identification and positioning method based on an RGB-D camera. The method comprises the following steps: 1) using a Microsoft Kinect camera sensor to collect a color image and a depth image, converting them into a three-dimensional point cloud, and extracting the plane in the scene; 2) after the plane in step 1) is extracted, carrying out object extraction and segmentation on the remaining point cloud; 3) carrying out identification and matching on each object point cloud set acquired in step 2); and 4) realizing object positioning by performing calculations on the object point clouds acquired in step 2). The method performs object identification and positioning on the three-dimensional point cloud image collected by the Microsoft RGB-D sensor Kinect II and does not involve complex operations such as matching multiple images during object positioning. Computational efficiency is greatly increased; at the same time, the method offers high real-time performance and is suitable for the complex environments of daily life.
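For orientation, the following PCL-based sketch shows one plausible realization of the geometric part of steps 1)-4) once a point cloud is available: RANSAC plane removal, Euclidean clustering of the remaining points as object segmentation, and the cluster centroid as the object position. The function name locateObjects and all thresholds are assumptions for illustration; the patent's own feature extraction and matching step is not shown.

```cpp
// Minimal sketch (assumed pipeline, not the patent's exact code):
// plane removal + Euclidean clustering + per-cluster centroid.
#include <pcl/ModelCoefficients.h>
#include <pcl/common/centroid.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/filters/filter.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/extract_clusters.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <vector>

pcl::PointCloud<pcl::PointXYZ>::Ptr locateObjects(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& input)
{
  // Drop invalid (NaN) points so that search structures work.
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  std::vector<int> keep;
  pcl::removeNaNFromPointCloud(*input, *cloud, keep);

  // 1) Fit the dominant plane (e.g. a table top) with RANSAC.
  pcl::ModelCoefficients::Ptr coeff(new pcl::ModelCoefficients);
  pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
  pcl::SACSegmentation<pcl::PointXYZ> seg;
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.01);          // 1 cm tolerance (assumed value)
  seg.setInputCloud(cloud);
  seg.segment(*inliers, *coeff);

  // 2) Remove the plane, keeping only candidate object points.
  pcl::PointCloud<pcl::PointXYZ>::Ptr objects(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::ExtractIndices<pcl::PointXYZ> extract;
  extract.setInputCloud(cloud);
  extract.setIndices(inliers);
  extract.setNegative(true);               // keep everything except the plane
  extract.filter(*objects);

  // 3) Split the remaining points into per-object clusters.
  pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
  tree->setInputCloud(objects);
  std::vector<pcl::PointIndices> clusters;
  pcl::EuclideanClusterExtraction<pcl::PointXYZ> ec;
  ec.setClusterTolerance(0.02);            // 2 cm (assumed value)
  ec.setMinClusterSize(100);
  ec.setSearchMethod(tree);
  ec.setInputCloud(objects);
  ec.extract(clusters);

  // 4) Position each object by the centroid of its cluster.
  pcl::PointCloud<pcl::PointXYZ>::Ptr positions(new pcl::PointCloud<pcl::PointXYZ>);
  for (const pcl::PointIndices& c : clusters) {
    Eigen::Vector4f centroid;
    pcl::compute3DCentroid(*objects, c, centroid);
    positions->push_back(pcl::PointXYZ(centroid[0], centroid[1], centroid[2]));
  }
  return positions;
}
```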

Description

technical field
[0001] The invention relates to the field of machine vision recognition and positioning, in particular to a visual recognition and positioning method based on an RGB-D camera.
Background technique
[0002] At present, most existing object recognition and positioning systems based on multi-camera color image sensors obtain the spatial position of each pixel by stereo matching of the images collected by the different sensors, which suffers from high cost, slow running speed, and high system complexity.
[0003] Object edge segmentation is mostly realized by convex-hull extraction on the color-camera image. This approach must take the apparent color of the object into account, is prone to misjudgment when the background color is similar to that of the object, and the extracted convex hull often has contour errors and includes parts of the background.
[0004] Com...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/70
Inventor: 张智军, 张文康, 黄永前
Owner: SOUTH CHINA UNIV OF TECH