
Indoor scene 3D reconstruction method based on Kinect

An indoor scene 3D reconstruction technology, applied in the field of computer vision. It addresses the problems of prior methods: point cloud models with many redundant points, computationally complex algorithms, and high hardware configuration requirements. The achieved effects are avoidance of point cloud redundancy, few redundant points, and low cost.

Active Publication Date: 2017-06-06
XIDIAN UNIV

AI Technical Summary

Problems solved by technology

[0003] 3D reconstruction has been studied for a long time in the field of scientific research, but due to the high cost of the required equipment, it has not yet reached the level of popularization.
[0007] The document "Henry P, Krainin M, Herbst E, et al. RGB-D mapping: Using depth cameras for dense 3D modeling of indoor environments[C]//RSS Workshop on RGB-D Cameras, 2010." proposed an indoor scene 3D reconstruction system based on SIFT (Scale-Invariant Feature Transform) feature matching for localization and the TORO (Tree-based Network Optimizer) optimization algorithm. The system uses depth data and color image data: the ICP algorithm, combined with SIFT features extracted from the color images, registers the point cloud data of two frames, and the TORO algorithm, an optimization algorithm for SLAM, is then used to obtain globally consistent point cloud data. For indoor scenes with inconspicuous features or even dim lighting, it can perform 3D reconstruction relatively accurately; however, the algorithm is computationally complex and reconstruction is slow.
[0008] The document "Fioraio N, Konolige K. Realtime visual and point cloud SLAM[C]//RSS Workshop on RGB-D Cameras, 2011." proposed the RGBD-SLAM algorithm, which uses an RGB-D sensor to obtain depth data and color image data, finds corresponding points between two frames of point cloud data with a k-d tree or a projection method, registers the point clouds with a correspondence-based ICP algorithm, and performs global optimization with g2o, an efficient nonlinear least-squares optimizer. It achieves a good reconstruction effect, but the problems of complex computation and slow reconstruction speed remain.
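For clarity, the following is a minimal sketch (not code from either cited paper) of the correspondence-based ICP step that such RGB-D pipelines build on: nearest-neighbour correspondences are found with a k-d tree, and the rigid transform is then solved in closed form with an SVD. The function names and fixed iteration count are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """One ICP iteration on source (N,3) and target (M,3) point arrays."""
    # 1. Correspondences: pair each source point with its nearest target point.
    _, idx = cKDTree(target).query(source)
    matched = target[idx]
    # 2. Closed-form rigid transform (Kabsch/SVD) minimising sum ||R p + t - q||^2.
    mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
    U, _, Vt = np.linalg.svd((source - mu_s).T @ (matched - mu_t))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_t - R @ mu_s

def icp(source, target, iters=20):
    """Repeat the step a fixed number of times, accumulating the transform."""
    R_acc, t_acc = np.eye(3), np.zeros(3)
    for _ in range(iters):
        R, t = icp_step(source, target)
        source = source @ R.T + t
        R_acc, t_acc = R @ R_acc, R @ t_acc + t
    return R_acc, t_acc
```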
[0009] Both of the above algorithms are computationally complex, demand high hardware configurations, and produce point cloud models with many redundant points.



Examples


Embodiment 1

[0038] Existing 3D reconstruction technology has been widely applied in robot navigation, industrial measurement, virtual interaction and other fields, but to achieve good results most prior-art algorithms still suffer from a large amount of computation, slow speed, and redundant points. In view of this situation, the present invention proposes a Kinect-based indoor scene three-dimensional reconstruction method; see figure 1. The indoor scene three-dimensional reconstruction method of the present invention comprises the following steps:

[0039] Step 1. Denoising and downsampling of depth data: First, set a timer t and start timing. The timer is used to decide when to stop acquiring point cloud data and perform global point cloud rendering, and its duration can be chosen according to the size of the scene. The Kinect camera is then used to obtain one frame of depth data of the objects in the indoor scene, and the joint bilateral filtering meth...
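A minimal sketch of this pre-processing, assuming the depth frame is a float32 array in metres with 0 marking invalid pixels and assuming (the excerpt does not specify this) that a same-size grayscale image is used as the guidance signal for the joint bilateral filter. The filter parameters and the number of pyramid levels are illustrative, not values from the patent.

```python
import numpy as np

def joint_bilateral_filter(depth, guide, radius=3, sigma_s=3.0, sigma_r=10.0):
    """Smooth `depth` with spatial weights plus range weights taken from `guide`."""
    h, w = depth.shape
    out = np.zeros_like(depth)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    pad_d = np.pad(depth, radius, mode='edge')
    pad_g = np.pad(guide, radius, mode='edge')
    for i in range(h):
        for j in range(w):
            win_d = pad_d[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            win_g = pad_g[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(win_g - guide[i, j])**2 / (2 * sigma_r**2))
            wgt = spatial * rng * (win_d > 0)          # ignore invalid depth pixels
            out[i, j] = (wgt * win_d).sum() / max(wgt.sum(), 1e-8)
    return out

def downsample(depth):
    """Halve resolution by averaging the valid pixels in each 2x2 block."""
    d = depth[:depth.shape[0] // 2 * 2, :depth.shape[1] // 2 * 2]
    blocks = d.reshape(d.shape[0] // 2, 2, d.shape[1] // 2, 2)
    valid = blocks > 0
    return blocks.sum(axis=(1, 3)) / np.maximum(valid.sum(axis=(1, 3)), 1)

def build_pyramid(depth, guide, levels=3):
    """Denoised depth at full, half and quarter resolution for coarse-to-fine registration."""
    d = joint_bilateral_filter(depth, guide)
    pyramid = [d]
    for _ in range(levels - 1):
        d = downsample(d)
        pyramid.append(d)
    return pyramid
```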

Embodiment 2

[0053] This Kinect-based indoor scene 3D reconstruction method is the same as that in Embodiment 1; the multi-resolution depth data obtained by downsampling in step 1 is used to calculate the point cloud registration transformation matrix in step 4.2, specifically including:

[0054] 4.2.1 Use the ICP algorithm to calculate the point cloud registration matrix from the lowest-resolution depth data and the predicted point cloud data.

[0055] 4.2.2 Then, starting from this point cloud registration matrix, use progressively higher-resolution depth data and predicted point cloud data to compute, step by step, a more accurate point cloud registration transformation matrix, and use it to update the current point cloud registration matrix.

[0056] In the calculation, the present invention first uses low-resolution depth data to initially calculate the point cloud registration matrix, and then based on this matrix, uses higher-resolution depth data to...
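A minimal sketch of this coarse-to-fine scheme, assuming each level of the depth pyramid has already been back-projected into an (N,3) point cloud, the levels are ordered coarsest first, and one predicted point cloud is available per level. The per-level iteration counts are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(source, target):
    """One correspondence + closed-form alignment step (same idea as the background sketch)."""
    _, idx = cKDTree(target).query(source)
    matched = target[idx]
    mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
    U, _, Vt = np.linalg.svd((source - mu_s).T @ (matched - mu_t))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_t - R @ mu_s

def register_coarse_to_fine(source_levels, target_levels, iters=(10, 5, 4)):
    """4.2.1-4.2.2: start ICP on the coarsest clouds, then refine on the finer ones."""
    T = np.eye(4)                                    # current registration estimate
    for src, tgt, n in zip(source_levels, target_levels, iters):
        for _ in range(n):
            src_t = src @ T[:3, :3].T + T[:3, 3]     # apply the current estimate first
            R, t = icp_step(src_t, tgt)
            dT = np.eye(4)
            dT[:3, :3] = R
            dT[:3, 3] = t
            T = dT @ T                               # fold the increment into the estimate
    return T
```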

Embodiment 3

[0059] This Kinect-based indoor scene 3D reconstruction method is the same as in Embodiments 1-2, and adopts the TSDF algorithm to perform the point cloud fusion in step 4.3, including:

[0060] 4.3.1 The TSDF algorithm represents three-dimensional space with a cubic grid, and each cell of the cube stores the distance D from that cell to the surface of the object model and a weight W.

[0061] The present invention adopts the TSDF algorithm. The main idea of this method is to set up a virtual cube (Volume) in the graphics card with side length L. In this example, the side length L of the virtual cube is set to 2 meters; the cube is then divided into N×N×N voxels (Voxel), with N set to 512 in this example, so that the side length of each voxel is L/N. Each voxel stores its distance D to the nearest object surface and its weight W. This example performs 3D reconstruction of a cabinet in the room.

[0062] 4.3.2 At the same time, positive and negative are used t...
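A minimal sketch of the TSDF fusion described in steps 4.3.1-4.3.2, assuming a pinhole camera with intrinsic matrix K and a 4x4 camera-to-world pose. The patent's cube uses L = 2 m and N = 512; a smaller N and an illustrative truncation distance mu are used here only so the dense arrays in this sketch stay small.

```python
import numpy as np

L, N, mu = 2.0, 128, 0.03                  # cube side (m), voxels per side, truncation (m)
voxel = L / N
D = np.ones((N, N, N), dtype=np.float32)   # truncated signed distance per voxel
W = np.zeros((N, N, N), dtype=np.float32)  # accumulated weight per voxel

def integrate(depth, K, T_cam_to_world, origin=np.zeros(3)):
    """Fuse one depth frame (metres, 0 = invalid) into the global (D, W) cube."""
    h, w = depth.shape
    # World coordinates of every voxel centre.
    idx = np.indices((N, N, N)).reshape(3, -1).T
    pts_w = origin + (idx + 0.5) * voxel
    # Transform into the camera frame and project with the pinhole model.
    T_world_to_cam = np.linalg.inv(T_cam_to_world)
    pts_c = pts_w @ T_world_to_cam[:3, :3].T + T_world_to_cam[:3, 3]
    z = pts_c[:, 2]
    ok = z > 1e-6
    u = np.zeros_like(z, dtype=int)
    v = np.zeros_like(z, dtype=int)
    u[ok] = np.round(K[0, 0] * pts_c[ok, 0] / z[ok] + K[0, 2]).astype(int)
    v[ok] = np.round(K[1, 1] * pts_c[ok, 1] / z[ok] + K[1, 2]).astype(int)
    ok &= (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d_meas = np.zeros_like(z)
    d_meas[ok] = depth[v[ok], u[ok]]
    ok &= d_meas > 0
    # Signed distance along the viewing ray: positive in front of the surface,
    # negative behind it; voxels far behind the surface are discarded.
    sdf = d_meas - z
    ok &= sdf > -mu
    tsdf = np.minimum(1.0, sdf / mu)
    # Weighted running average of D, with one weight unit per observation.
    sel = np.where(ok)[0]
    i, j, k = idx[sel].T
    D[i, j, k] = (D[i, j, k] * W[i, j, k] + tsdf[sel]) / (W[i, j, k] + 1)
    W[i, j, k] += 1
```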



Abstract

The invention discloses an indoor scene 3D reconstruction method based on Kinect, and solves the technical problem of reconstructing a 3D model of an indoor scene in real time while avoiding excessive redundant points. The method comprises the steps of: obtaining the depth data of an object with a Kinect, then de-noising and down-sampling the depth data; obtaining the point cloud data of the current frame and calculating the normal vector of each point in the frame; using the TSDF algorithm to establish a global data cube, and using a ray casting algorithm to calculate predicted point cloud data; calculating a point cloud registration matrix with the ICP algorithm and the predicted point cloud data, and fusing the point cloud data of each frame into the global data cube, frame by frame, until a good fusion effect is obtained; and rendering the point cloud data with an isosurface extraction algorithm to construct the 3D model of the object. The method improves registration speed and registration precision, fuses point clouds quickly, produces few redundant points, and can be used for real-time reconstruction of indoor scenes.
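Read as an algorithm, the abstract describes a per-frame loop. The skeleton below is a sketch of how the pieces fit together, not the patent's implementation: it reuses the helpers sketched in the embodiments above, while get_depth_frame, depth_to_points, raycast_predicted_cloud and extract_mesh are hypothetical placeholders for the Kinect capture, back-projection, ray-casting and isosurface-extraction steps, and the pose bookkeeping is simplified.

```python
import numpy as np

def reconstruct(num_frames, K):
    """Per-frame loop from the abstract: acquire -> denoise/downsample -> register -> fuse."""
    pose = np.eye(4)                                   # camera-to-world pose estimate
    for _ in range(num_frames):                        # the patent stops on a timer instead
        depth = get_depth_frame()                      # hypothetical Kinect capture
        pyramid = build_pyramid(depth, depth)          # step 1 (depth used as its own guide here)
        # Back-project each level into a camera-frame point cloud, coarsest first.
        clouds = [depth_to_points(d, K) for d in reversed(pyramid)]
        # Predicted point cloud ray-cast from the global TSDF cube at the previous pose.
        predicted = raycast_predicted_cloud(D, W, K, pose)
        # Coarse-to-fine ICP against the predicted cloud gives the new camera pose.
        pose = register_coarse_to_fine(clouds, [predicted] * len(clouds))
        # Fuse the full-resolution depth frame into the global cube at that pose.
        integrate(pyramid[0], K, pose)
    # Render the fused cube with an isosurface extraction (e.g. marching cubes).
    return extract_mesh(D)
```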

Description

technical field

[0001] The invention belongs to the technical field of computer vision, and in particular relates to a Kinect-based three-dimensional reconstruction method for indoor scenes. The invention can be used in fields such as robot navigation, industrial measurement, and virtual interaction.

Background technique

[0002] 3D reconstruction technology is a research hotspot and a difficult problem in frontier fields such as computer vision, artificial intelligence, and virtual reality. It is also one of the major challenges faced in basic and applied research, and can be applied in robot navigation, industrial measurement, immersive virtual interaction and other fields.

[0003] Three-dimensional reconstruction has been studied for a long time in the field of scientific research, but due to the high cost of the required equipment, it has not yet reached the level of popularization. With the promotion and use of the Microsoft Kinect somatosensory camera, the cost has been greatly reduced, so that ordinary...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/33, G06T7/55, G06T15/00, G06T17/30
CPC: G06T15/005, G06T17/30, G06T2200/08, G06T2207/10028, G06T2207/10048
Inventor: 卢朝阳丹熙方李静矫春龙
Owner: XIDIAN UNIV