
Indoor scene three-dimensional reconstruction method based on single depth vision sensor

Depth-sensor and indoor-scene technology, applied in the field of 3D reconstruction of indoor scenes based on a single depth vision sensor. It addresses the problem that complex camera motion paths can degrade the reconstructed 3D model of an indoor scene.

Inactive Publication Date: 2015-12-30
TIANJIN UNIVERSITY OF TECHNOLOGY

AI Technical Summary

Problems solved by technology

However, the early KinectFusion system uses a planar voxel grid and can therefore only operate over a small volume. Although a visual odometry algorithm can improve the accuracy of the KinectFusion system, it works only in relatively confined spaces with simple camera motion trajectories; complex camera motion paths degrade the reconstructed 3D model of the whole indoor scene.




Embodiment Construction

[0129] Embodiments of the present invention are described in further detail below:

[0130] A method for 3D reconstruction of indoor scenes based on a single depth vision sensor, as shown in Figure 1, includes the following steps:

[0131] Step 1. Jointly calibrate the color camera and the depth camera: solve for the intrinsic parameter matrix K and the extrinsic parameters of the depth and color cameras, and calibrate the depth data. Use a single depth-sensor device to collect depth data and RGB data of the indoor scene;
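The intrinsic matrix K solved in Step 1 is what later turns each depth pixel into a 3D point. A minimal sketch of that pinhole back-projection, with purely illustrative intrinsic values (fx, fy, cx, cy are not taken from the patent):

```python
# Back-project a depth pixel (u, v) with depth z into a 3D point in the
# depth camera's coordinate frame, using the pinhole model with
# K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]].
# All intrinsic values below are hypothetical placeholders.

def backproject(u, v, depth, fx, fy, cx, cy):
    """Return (X, Y, Z) in the depth camera's frame for pixel (u, v)."""
    z = depth
    x = (u - cx) * z / fx  # horizontal offset from principal point, scaled by depth
    y = (v - cy) * z / fy  # vertical offset from principal point, scaled by depth
    return (x, y, z)
```

Applying this to every valid pixel of a depth frame yields the per-frame point cloud used in the later steps; the pixel at the principal point (cx, cy) always maps straight down the optical axis to (0, 0, z).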

[0132] As shown in Figure 2, step 1 includes the following specific steps:

[0133] 1.1. Extract the corner points of the calibration chessboard images captured by the color camera and the depth camera as calibration points, perform camera calibration, and solve for the intrinsic parameter matrix K and the extrinsic parameters of the depth and color cameras;

[0134] (1) Extract the corner points of the calibration chessboard ima...
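The extrinsic parameters solved in step 1.1 relate the depth camera's frame to the color camera's frame. A minimal sketch of applying such a rigid transform, p_color = R·p_depth + t, where R and t are illustrative placeholders rather than values from the patent:

```python
# Apply an extrinsic rigid transform [R | t] to express a point measured in
# the depth camera's frame in the color camera's frame.
# R is a 3x3 rotation matrix (list of rows), t a 3-vector, p a 3D point.

def transform_point(R, t, p):
    """Return R @ p + t as a tuple, computed in pure Python."""
    return tuple(
        sum(R[i][j] * p[j] for j in range(3)) + t[i]
        for i in range(3)
    )

# Identity rotation leaves the point unchanged apart from the translation:
R_identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
```

This per-point mapping is what lets each depth sample be colored from the corresponding RGB pixel once both cameras are jointly calibrated.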



Abstract

The invention relates to an indoor scene three-dimensional reconstruction method based on a single depth vision sensor. The method comprises the following steps:
  • First, continuously scan the whole indoor scene with the single depth vision sensor.
  • Second, preprocess the collected depth data (denoising, hole repair, etc.) to obtain smooth depth data.
  • Third, compute the point cloud corresponding to the current depth frame from the depth data collected in the second step.
  • Fourth, register the point clouds obtained from depth frames at different viewpoints to obtain a complete point cloud of the indoor scene.
  • Fifth, perform plane fitting, segment the scene point cloud, and build an independent, complete three-dimensional model for each object in the indoor scene.
The scanning device required by the method is simple, the scanned data is comprehensive, and the accuracy and computational efficiency of point cloud registration are effectively improved. Finally, a complete, high-quality set of three-dimensional models with geometric structure and color maps can be built for the indoor scene.
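The plane fitting and segmentation in the fifth step can be approached with RANSAC; the patent does not spell out its exact fitting procedure, so the following is only an illustrative pure-Python sketch:

```python
import random

# RANSAC plane fitting: repeatedly sample 3 points, fit the plane through
# them, and keep the plane that explains the most points within a distance
# tolerance. All parameters below (iters, tol) are illustrative.

def fit_plane(p1, p2, p3):
    """Plane through three points as (normal n, offset d) with n.p + d = 0."""
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)  # cross product
    d = -(n[0] * p1[0] + n[1] * p1[1] + n[2] * p1[2])
    return n, d

def ransac_plane(points, iters=200, tol=0.01, seed=0):
    """Return (inlier_count, inliers) for the best plane found."""
    rng = random.Random(seed)
    best = (0, None)
    for _ in range(iters):
        n, d = fit_plane(*rng.sample(points, 3))
        norm = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
        if norm < 1e-12:
            continue  # degenerate (collinear) sample, skip
        inliers = [
            p for p in points
            if abs(n[0] * p[0] + n[1] * p[1] + n[2] * p[2] + d) / norm < tol
        ]
        if len(inliers) > best[0]:
            best = (len(inliers), inliers)
    return best
```

Removing the inliers of the dominant plane (a wall or the floor) and repeating gives a simple plane-by-plane segmentation of the scene point cloud; the remaining clusters correspond to individual objects.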

Description

Technical Field
[0001] The invention belongs to the technical field of three-dimensional reconstruction of indoor scenes, and in particular relates to a method for three-dimensional reconstruction of indoor scenes based on a single depth vision sensor.
Background Technique
[0002] Building a high-quality 3D model of an indoor scene, and especially creating an independent 3D model for each object in the room, is a very challenging task. Many current 3D reconstruction methods for indoor scenes focus on reconstructing local models, and as a result tend to lose scene detail, require cumbersome user interaction, or depend on expensive large-scale hardware such as laser scanners.
[0003] Commercial depth cameras can reconstruct 3D models of objects in a scene, but building a 3D model of an entire indoor scene is different from building a 3D model of a single object. Within th...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T17/00; G06T7/00
Inventors: 汪日伟, 鲍红茹, 温显斌, 张桦, 陈霞
Owner: TIANJIN UNIVERSITY OF TECHNOLOGY