
A method for 3D reconstruction of indoor scenes based on a single depth vision sensor

A technology in the field of 3D reconstruction of indoor scenes using depth vision, which addresses the problem that complex camera motion paths degrade the quality of the reconstructed 3D model of an indoor scene.

Inactive Publication Date: 2018-04-13
TIANJIN UNIVERSITY OF TECHNOLOGY


Problems solved by technology

However, the early KinectFusion system uses a planar voxel grid and can therefore only operate over a small volume. Although a visual odometry algorithm can improve the accuracy of the KinectFusion system, it works only in relatively narrow spaces with simple camera motion trajectories; complex camera motion paths degrade the reconstructed 3D model of the entire indoor scene.


Embodiment Construction

[0132] Embodiments of the present invention are described in further detail below:

[0133] A method for 3D reconstruction of indoor scenes based on a single depth vision sensor, as shown in figure 1, includes the following steps:

[0134] Step 1. Jointly calibrate the color camera and the depth camera, solve for the intrinsic parameter matrix K and the extrinsic parameters of both cameras, and calibrate the depth data; use a single depth sensor device to collect depth data and RGB data of the indoor scene;

[0135] As shown in figure 2, step 1 includes the following specific steps:

[0136] 1.1. Extract the corner points of the calibration chessboard images taken by the color camera and the depth camera as calibration points, perform camera calibration, and solve for the intrinsic parameter matrix K and the extrinsic parameters of the depth camera and the color camera;

[0137] (1) Extract the upper corner points of the calibration chessboard ima...
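Once the intrinsic matrix K from the calibration step above is known, each depth pixel can be back-projected to a 3D point in camera space, which is the basis for the later point-cloud computation. A minimal sketch, assuming a hypothetical K (the values below are illustrative; a real K comes from the chessboard calibration):

```python
import numpy as np

# Hypothetical intrinsic matrix K for illustration:
# fx, fy = focal lengths; cx, cy = principal point.
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])

def backproject(u, v, depth, K):
    """Map pixel (u, v) with depth (metres) to a camera-space 3D point:
    X = depth * K^{-1} [u, v, 1]^T."""
    pixel = np.array([u, v, 1.0])
    return depth * np.linalg.solve(K, pixel)

point = backproject(320, 240, 2.0, K)
```

Applying this to every valid depth pixel of a frame yields the per-frame point cloud; a pixel at the principal point maps straight down the optical axis.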


Abstract

The invention relates to a method for three-dimensional reconstruction of an indoor scene based on a single depth vision sensor. Its technical features include the following steps: Step 1, use a single depth vision sensor to continuously scan the entire indoor scene; Step 2, preprocess the collected depth data (noise removal, hole repair, etc.) to obtain smooth depth data; Step 3, from the depth data obtained in Step 2, compute the point cloud corresponding to the current depth frame; Step 4, register the point clouds obtained from depth frames of different viewpoints to obtain a complete point cloud of the indoor scene; Step 5, perform plane fitting to segment the point cloud and build an independent, complete 3D model of each object in the indoor scene. The scanning equipment used in the present invention is simple; the scanned data is comprehensive and effectively improves point cloud registration accuracy and computational efficiency; finally, a complete set of high-quality three-dimensional models with geometric structure and color maps can be built for the indoor scene.
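Step 4 registers point clouds from different viewpoints. The patent does not disclose its exact registration algorithm, but the core of ICP-style registration is the closed-form least-squares rigid alignment of corresponding points (Kabsch/Umeyama); a minimal numpy sketch under that assumption:

```python
import numpy as np

def rigid_align(src, dst):
    """Closed-form least-squares rigid transform (Kabsch) mapping
    corresponding points src -> dst; this is the inner step of
    ICP-style point-cloud registration."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Toy check: recover a known rotation about z plus a translation.
rng = np.random.default_rng(0)
P = rng.normal(size=(20, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
Q = P @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = rigid_align(P, Q)
```

With exact correspondences and no noise, the true rotation and translation are recovered exactly; a full ICP loop would alternate this step with nearest-neighbor correspondence search.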

Description

Technical Field

[0001] The invention belongs to the technical field of three-dimensional reconstruction of indoor scenes, and in particular relates to a method for three-dimensional reconstruction of indoor scenes based on a single depth vision sensor.

Background Technique

[0002] Building a high-quality 3D model of an indoor scene, and especially creating an independent 3D model for each object in the room, is a very challenging task. At present, many 3D reconstruction methods for indoor scenes focus on reconstructing local models, and therefore tend to lose many indoor-scene details, require cumbersome user interaction, or depend on expensive hardware such as laser scanners for large-scale reconstruction.

[0003] Commercial depth cameras can reconstruct 3D models of objects in a scene, but building a 3D model of an entire indoor scene differs from building a 3D model of a single object. Within th...
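Step 5 of the pipeline segments the scene point cloud by plane fitting (walls, floor, table tops) before per-object modeling. A minimal total-least-squares plane fit via SVD, a common building block for such segmentation (the sampling grid below is illustrative):

```python
import numpy as np

def fit_plane(points):
    """Total-least-squares plane fit: returns the centroid and the unit
    normal, taken as the direction of least variance (smallest singular
    vector of the centred point matrix)."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    normal = Vt[-1]
    return centroid, normal

# Toy check: a grid of points lying exactly on the plane z = 1.
xs, ys = np.meshgrid(np.linspace(0.0, 1.0, 5), np.linspace(0.0, 1.0, 5))
pts = np.column_stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
c, n = fit_plane(pts)
```

For these coplanar points the recovered normal is (up to sign) the z-axis; in a real pipeline this fit would be wrapped in RANSAC to tolerate outliers from neighboring surfaces.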

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T17/00; G06T7/00
Inventors: 汪日伟, 鲍红茹, 温显斌, 张桦, 陈霞
Owner TIANJIN UNIVERSITY OF TECHNOLOGY