Target object three-dimensional color point cloud generation method based on KINECT

A Kinect-based technology for generating three-dimensional color point clouds of target objects, applied in the field of virtual reality. It addresses the problem that point cloud data acquired directly from Kinect cannot completely express the target object, and achieves low memory requirements, accurate registration results, and improved search speed.

Inactive Publication Date: 2016-10-05
HEFEI UNIV OF TECH


Problems solved by technology

[0005] The present invention solves the problem that point cloud data acquired directly from Kinect cannot fully express the target object, and proposes a method for obtaining a complete three-dimensional color point cloud of the target object.

Method used




Embodiment Construction

[0033] The present invention generates a three-dimensional color point cloud of a target object according to the following four steps (as shown in figure 1 and figure 2):

[0034] Step 1: Obtain an RGBD image containing the target object.

[0035] A single shot with Kinect yields an RGBD image (an RGB image and a depth image, collectively referred to as an RGBD image) that contains only the local region of the target object visible from the camera's viewpoint. To obtain the complete color and depth information of the target object, it must be photographed from different angles. Kinect is therefore moved around the target object in a full circle, acquiring multiple RGBD images that together contain the complete information of the target object. At the same time, during shooting, the positional relationship (rotation and translation) between each pair of adjacent images is recorded as the initial ...
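Each RGBD frame acquired in this step can be turned into a colored point-cloud fragment by back-projecting the depth image through the camera's pinhole intrinsics. The following is a minimal numpy sketch of that back-projection; the intrinsic values (`fx`, `fy`, `cx`, `cy`) below are placeholder assumptions, not Kinect's calibrated parameters:

```python
import numpy as np

def depth_to_point_cloud(depth, rgb, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an N x 6 array of
    colored 3D points (x, y, z, r, g, b). Pixels with zero depth
    (no sensor return) are discarded."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx                            # pinhole model
    y = (v - cy) * z / fy
    valid = z > 0
    pts = np.stack([x[valid], y[valid], z[valid]], axis=1)
    cols = rgb[valid].astype(np.float64)
    return np.hstack([pts, cols])

# Tiny synthetic example: a 2x2 depth map with one invalid pixel.
depth = np.array([[1.0, 2.0], [0.0, 1.5]])
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
cloud = depth_to_point_cloud(depth, rgb, fx=525.0, fy=525.0, cx=1.0, cy=1.0)
print(cloud.shape)  # -> (3, 6): three valid pixels, xyz + rgb
```

Applying this per frame yields the point-cloud fragments that the later registration step aligns; the recorded rotation and translation between adjacent shots seed that alignment.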



Abstract

The invention discloses a Kinect-based method for generating a three-dimensional color point cloud of a target object. Firstly, a set of RGBD images is photographed around the target object so that the set contains the complete information of the object. Then, for each RGBD image, Otsu segmentation is performed on the depth image to acquire a foreground mark, which serves as the input to a GrabCut algorithm; the RGB image is then segmented again to acquire the accurate area of the target object, and background information is removed. Registration is performed on adjacent point cloud fragments using an improved ICP algorithm to acquire the transformation relation matrices between the fragments. Finally, the point clouds are spliced using these transformation matrices, and down-sampling is performed to reduce redundancy, so that the complete three-dimensional color point cloud data of the target object is acquired.
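The registration step at the heart of this pipeline estimates a rigid transform between adjacent point-cloud fragments. The patent's improved ICP algorithm is not reproduced here; as a sketch of the core sub-problem only, the following computes the least-squares rotation and translation between two clouds with known correspondences (the SVD-based closed form evaluated inside each ICP iteration):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst,
    assuming src[i] corresponds to dst[i]. A full ICP would
    re-estimate correspondences and repeat this step each iteration."""
    c_src = src.mean(axis=0)
    c_dst = dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Verify on a cloud rotated 90 degrees about z and shifted.
rng = np.random.default_rng(0)
src = rng.standard_normal((50, 3))
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
dst = src @ R_true.T + t_true
R, t = best_rigid_transform(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # -> True True
```

Once each adjacent pair's transform is known, the fragments are chained into one coordinate frame (the splicing step) and the merged cloud is down-sampled to remove redundant points.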

Description

Technical field

[0001] The invention relates to point cloud processing and image processing technology, belongs to the field of virtual reality, and specifically relates to a method for generating a three-dimensional color point cloud of a target object based on RGBD images obtained by Kinect.

Background technique

[0002] 3D object reconstruction is a research hotspot in computer vision, and point cloud data is the basis for 3D reconstruction. Point cloud data can be acquired with 3D scanning equipment, computed from multiple images, or acquired with somatosensory interactive devices such as Kinect. 3D scanning equipment is often expensive, and the point cloud data it acquires is so large that processing takes a long time. In addition, point cloud data acquired by 3D scanning equipment often does not contain color information. Although some current 3D scanners provide the function of taking pictures at the same...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/00, G06T5/50
CPC: G06T5/50
Inventors: 余烨路强薛峰李冰飞张小魏
Owner: HEFEI UNIV OF TECH