
Robust real-time 3D reconstruction method based on consumer-grade camera

A real-time 3D reconstruction method based on consumer-grade camera technology, applicable to 3D modeling, image enhancement, image analysis, etc. It addresses the problems of incomplete and inaccurate reconstructed models and high computational cost in existing methods, achieving efficient noise suppression and real-time visualization of the reconstruction process.

Active Publication Date: 2018-09-07
HARBIN INST OF TECH

AI Technical Summary

Problems solved by technology

[0003] The present invention aims to solve the problems of high computational cost and inaccurate, incomplete reconstructed models in existing methods, and provides a robust real-time 3D reconstruction method based on consumer-grade cameras.



Examples


Specific Embodiment 1

[0028] Specific Embodiment 1: This embodiment is described with reference to Figures 1 to 10. The robust real-time 3D reconstruction method based on a consumer-grade camera in this embodiment is implemented in the following steps:

[0029] 1. During the movement of the camera, take the current video frame as input and estimate the camera pose of each video frame in the scene coordinate system;

[0030] 2. Select the best keyframe among the video frames for depth estimation;

[0031] 3. Use a fast, robust depth estimation algorithm to estimate the depth information of each video frame, obtaining a depth map for each frame;

[0032] 4. Transform the depth map of each video frame into a truncated signed distance field, fuse it incrementally on voxels, and finally extract the initial triangular mesh surface; this completes the robust real-time 3D reconstruction method based on a consumer-grade camera.
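Step 4's incremental fusion is, at its core, a per-voxel weighted running average of truncated signed distances. The following is a minimal illustrative sketch of that idea, not the patent's actual implementation; the function name, parameters, and the simple flat voxel layout are all assumptions for the example.

```python
import numpy as np

def integrate_depth(tsdf, weights, voxel_centers, cam_pose, K, depth, trunc=0.05):
    """Fuse one depth map into a TSDF voxel grid via weighted averaging.

    tsdf, weights : (N,) arrays holding current TSDF values and fusion weights
    voxel_centers : (N, 3) world coordinates of the voxel centers
    cam_pose      : 4x4 world-to-camera transform of this frame
    K             : 3x3 camera intrinsic matrix
    depth         : (H, W) depth map of the current frame
    """
    # Transform voxel centers into the camera frame.
    pts = (cam_pose[:3, :3] @ voxel_centers.T + cam_pose[:3, 3:4]).T
    z = pts[:, 2]
    # Project onto the image plane and round to pixel coordinates.
    uv = (K @ pts.T).T
    u = np.round(uv[:, 0] / z).astype(int)
    v = np.round(uv[:, 1] / z).astype(int)
    H, W = depth.shape
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.where(valid, depth[v.clip(0, H - 1), u.clip(0, W - 1)], 0.0)
    valid &= d > 0
    # Truncated signed distance: observed depth minus voxel depth, scaled to [-1, 1].
    sdf = np.clip(d - z, -trunc, trunc) / trunc
    valid &= (d - z) > -trunc          # ignore voxels far behind the surface
    # Incremental fusion: weighted running average per voxel.
    w_new = weights + valid
    upd = valid & (w_new > 0)
    tsdf[upd] = (tsdf[upd] * weights[upd] + sdf[upd]) / w_new[upd]
    weights[:] = w_new
    return tsdf, weights
```

A mesh extractor (e.g., Marching Cubes, as the abstract names) would then be run over the fused grid to produce the triangular surface.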

Specific Embodiment 2

[0033] Specific Embodiment 2: This embodiment differs from Embodiment 1 in that step 1 is specifically:

[0034] (a) Build a keyframe set

[0035] During the movement of the camera, keyframes are selected from the video frames according to temporal-distance and spatial-distance thresholds; each keyframe corresponds to an estimated camera pose, and all keyframes form the keyframe set.

[0036] (b) Construct a three-dimensional map

[0037] The three-dimensional map contains point cloud data, where p_i is a 3D point in the point cloud. When a new keyframe is added to the keyframe set, it is stereo-matched against the other keyframes, and the newly generated point cloud data are added to the point cloud. Each three-dimensional point p_i records its three-dimensional coordinates, normal direction, and pixel feature...
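Steps (a) and (b) describe two structures: a keyframe set grown by temporal/spatial-distance thresholds, and map points storing coordinates, a normal, and a pixel feature. A minimal sketch of these structures follows; all field names, threshold defaults, and the world-to-camera pose convention are assumptions, not taken from the patent.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class MapPoint:
    """A 3D point p_i in the map: position, normal direction, pixel feature."""
    position: np.ndarray    # (3,) world coordinates
    normal: np.ndarray      # (3,) unit normal direction
    descriptor: np.ndarray  # pixel feature (e.g., a patch or feature vector)

@dataclass
class Keyframe:
    pose: np.ndarray        # 4x4 world-to-camera pose estimate
    center: np.ndarray      # (3,) camera center in world coordinates
    frame_index: int        # index of the source video frame

def maybe_add_keyframe(keyframes, pose, frame_index,
                       dist_thresh=0.1, time_thresh=30):
    """Add a keyframe once temporal OR spatial distance exceeds its threshold."""
    # Camera center recovered from a world-to-camera pose: c = -R^T t.
    center = -pose[:3, :3].T @ pose[:3, 3]
    if keyframes:
        spatial = np.linalg.norm(center - keyframes[-1].center)
        temporal = frame_index - keyframes[-1].frame_index
        if spatial <= dist_thresh and temporal <= time_thresh:
            return False    # too close to the last keyframe; skip
    keyframes.append(Keyframe(pose, center, frame_index))
    return True
```

On each new keyframe, stereo matching against the existing keyframes would produce new `MapPoint` entries appended to the point cloud.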

Specific Embodiment 3

[0046] Specific Embodiment 3: This embodiment differs from Embodiments 1 and 2 in that step 2 is specifically:

[0047] (1) The keyframes in the keyframe set are sorted in ascending order of their baseline to the current frame; the first M frames form a subset, from which the keyframes with the smallest angle to the current frame are selected. Let the camera center coordinates of the keyframes be c_1, c_2, c_3, ..., c_n and the camera center coordinate of the current frame be c. The baseline between the current frame and the m-th keyframe is computed as:

[0048] b_m = ||c - c_m||

[0049] (2) Sort the keyframes in ascending order of baseline and select the keyframe subset using a distance threshold T, where T is defined as twice the average distance between adjacent keyframes, and the angle between the current frame and the key...
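Steps (1)-(2) can be sketched as follows. The baseline is taken to be the Euclidean distance between camera centers, b_m = ||c - c_m||, matching the definition above; the function name and the omission of the (truncated) angle criterion are assumptions for this sketch.

```python
import numpy as np

def select_keyframe_subset(centers, current_center, M):
    """Sort keyframes by baseline to the current frame, keep the first M,
    then filter by the distance threshold T.

    centers        : (n, 3) camera centers c_1..c_n of the keyframe set
    current_center : (3,) camera center c of the current frame
    """
    # Baseline between the current frame and each keyframe: b_m = ||c - c_m||.
    baselines = np.linalg.norm(centers - current_center, axis=1)
    order = np.argsort(baselines)       # ascending baseline
    subset = order[:M]
    # T: twice the average distance between adjacent keyframes
    # (requires at least two keyframes in the set).
    adjacent = np.linalg.norm(np.diff(centers, axis=0), axis=1)
    T = 2.0 * adjacent.mean()
    # Keep only keyframes whose baseline is within the threshold.
    return [int(i) for i in subset if baselines[i] <= T]
```

The source text is cut off before the angle criterion, so only the baseline filtering is shown here.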



Abstract

The invention relates to a robust real-time three-dimensional (3D) reconstruction method based on a consumer-grade camera, and aims to solve the problems of high computational cost and inaccurate, incomplete reconstructed models in existing methods. The method comprises the following steps: 1, estimating the camera pose of each video frame in a scene coordinate system, taking the current video frame of the camera as input during camera movement; 2, selecting the best keyframe among the video frames for depth estimation; 3, estimating the depth information of each video frame with a fast, robust depth estimation algorithm to obtain a depth map for each frame; and 4, converting the depth map of each video frame into a truncated signed distance field (TSDF), performing the weighted TSDF average over voxels in parallel, incrementally fusing the depth map of each video frame, and constructing a triangular mesh surface with the Marching Cubes algorithm. The method is applied in the field of image processing.

Description

Technical field

[0001] The invention relates to a robust real-time three-dimensional reconstruction method based on a consumer-grade camera.

Background technique

[0002] With the popularity of mobile phones and digital cameras, it is becoming more and more convenient to obtain high-quality images. An urgent need is to use these image data to reconstruct the three-dimensional world we live in, including objects, scenes, and even entire environments. In existing image-based 3D reconstruction methods, the industrial camera equipment used is expensive and the computational cost is high; reconstructing even a small scene generally requires several hours of processing on a high-performance computer. Moreover, sensor noise, occlusions, and illumination changes often cause 3D reconstruction tasks to fail, and these problems are difficult to predict merely by observing the images. Because of these issues, models that take hours to reconstruct are often impreci...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T7/55, G06T17/30
CPC: G06T17/30, G06T2207/30244
Inventors: 王宽全, 李兆歆, 左旺孟, 张磊
Owner: HARBIN INST OF TECH