
A 3D scene reconstruction method based on deep learning

A 3D scene reconstruction and deep learning technology, applied in 3D modeling, image analysis, image enhancement, etc. It solves the problem that scale information cannot be effectively estimated with a monocular camera, and achieves effective outdoor 3D reconstruction.

Active Publication Date: 2019-03-12
BEIJING INSTITUTE OF TECHNOLOGY
Cites: 8 · Cited by: 56

AI Technical Summary

Problems solved by technology

[0006] The purpose of the present invention is to overcome the technical defects that existing 3D scene reconstruction methods based on depth cameras cannot work outdoors, and that methods based on monocular cameras cannot effectively estimate scale information, and to propose a 3D scene reconstruction method based on deep learning.



Examples


Embodiment 1

[0064] The neural network used in this Embodiment 1 contains 28 neural layers; its structure is shown in the attached Figure 2. The specific implementation steps of this embodiment are shown in Figure 1. As can be seen from Figure 1, the method of the present invention comprises the following steps:

[0065] Step A: Initialize the optimization graph;

[0066] Specifically, in this embodiment, the graph optimization tool g2o is initialized, and the solver and optimization algorithm to be used are selected;
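The patent names g2o but does not disclose its configuration. As a language-neutral illustration of what "initializing the optimization graph and selecting a solver" amounts to, the following hypothetical sketch builds a tiny 1-D pose graph and solves it by linear least squares; g2o performs the same kind of graph-based least-squares optimization, but over full SE(3) camera poses:

```python
import numpy as np

# Hypothetical 1-D pose graph: three camera poses x0, x1, x2 on a line,
# with x0 anchored at 0. Each edge is a relative measurement; the loop
# closure is slightly inconsistent with the odometry, so optimization
# distributes the error (this is what g2o does over SE(3) poses).
#   x1 - x0 ≈ 1.0   (odometry)
#   x2 - x1 ≈ 1.0   (odometry)
#   x2 - x0 ≈ 2.1   (loop closure)
# With x0 fixed, the problem is linear: A @ [x1, x2] ≈ b.
A = np.array([[ 1.0, 0.0],    # row for x1 - x0
              [-1.0, 1.0],    # row for x2 - x1
              [ 0.0, 1.0]])   # row for x2 - x0
b = np.array([1.0, 1.0, 2.1])

# Least-squares solve = the Gauss-Newton step of a graph optimizer.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)  # optimized poses [x1, x2]
```

In a real pipeline each node would be a 6-DoF camera pose and each edge a relative-pose constraint, and g2o's solver/algorithm choice (e.g. Levenberg-Marquardt over a sparse block solver) replaces the `lstsq` call.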

[0067] Step B: Capture a color image;

[0068] Specifically, in this embodiment, the image is captured by a color camera;

[0069] Photograph the scene with a color camera; the scene structure in the photograph should be as clear as possible. The image is then transferred to the program through the USB port;

[0070] Step C: Estimate scene depth;

[0071] Specifically in this embodiment, the depth structure of the scene is estimated throu...
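The paragraph above is truncated, but per the abstract the scene depth is estimated by a fully convolutional residual network. The patent's 28-layer network is not reproduced here; as a toy illustration of the fully convolutional principle (every layer is a convolution, so the spatial layout of the input is preserved all the way to the output depth map), here is a single 3×3 convolution + ReLU stage in NumPy, with hypothetical weights:

```python
import numpy as np

def conv3x3_relu(img, kernel, bias=0.0):
    """One fully convolutional stage: 3x3 convolution (zero padding) + ReLU.
    Spatial size is preserved, so stacking such stages maps an H x W image
    to an H x W depth map, as a fully convolutional depth network does."""
    h, w = img.shape
    padded = np.pad(img, 1)            # zero-pad so the output is also h x w
    out = np.zeros_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel) + bias
    return np.maximum(out, 0.0)        # ReLU nonlinearity

rng = np.random.default_rng(0)
image = rng.random((8, 8))             # stand-in grayscale input
kernel = rng.standard_normal((3, 3)) * 0.1   # hypothetical learned weights
depth_like = conv3x3_relu(image, kernel)
print(depth_like.shape)                # (8, 8): same spatial size as the input
```

A real network adds residual skip connections, many channels per layer, and trained weights; the structural point is only that convolutional layers keep the per-pixel correspondence needed for dense depth prediction.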



Abstract

The invention relates to a three-dimensional scene reconstruction method based on deep learning, belonging to the technical fields of deep learning and machine vision. The depth structure of the scene is estimated by a convolutional neural network, and the dense structure is refined by a multi-view method. A fully convolutional residual neural network is trained to predict the depth map. Based on color images taken from different angles, the depth map is optimized and the camera pose is estimated using epipolar geometry and dense optimization methods. Finally, the optimized depth map is projected into three-dimensional space and visualized as a point cloud. The method effectively solves the problem of outdoor 3D reconstruction and provides high-quality point cloud output; it can be used under any lighting conditions; and it overcomes the shortcoming that monocular methods cannot estimate the actual size of objects.
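The final step of the abstract (projecting the optimized depth map into three-dimensional space as a point cloud) is standard pinhole back-projection. A minimal NumPy sketch; the intrinsics `fx, fy, cx, cy` and the depth values are hypothetical, not values from the patent:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an H x W depth map to an N x 3 point cloud via the
    pinhole model: X = (u - cx) * d / fx,  Y = (v - cy) * d / fy,  Z = d."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Toy depth map: a flat plane 2 m from the camera, with toy intrinsics.
depth = np.full((4, 4), 2.0)
cloud = depth_to_point_cloud(depth, fx=50.0, fy=50.0, cx=2.0, cy=2.0)
print(cloud.shape)   # (16, 3): one 3-D point per pixel
```

The resulting N×3 array is exactly the per-pixel point cloud that tools such as PCL or Open3D can visualize.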

Description

Technical Field

[0001] The present invention relates to a three-dimensional scene reconstruction method based on deep learning, and in particular to a three-dimensional reconstruction method that estimates a scene depth map through a deep learning method, then optimizes the depth map through a multi-view method, and reconstructs the scene as a three-dimensional point cloud, belonging to the technical fields of deep learning and machine vision.

Background Technique

[0002] In computer vision and computer graphics, 3D reconstruction is the process of capturing the shape and appearance of real objects. This process can be done by active or passive methods.

[0003] Structure from Motion (SfM) or Simultaneous Localization and Mapping (SLAM) is considered an effective way to reconstruct a scene because it can estimate the camera pose and the scene geometry in parallel. However, how to obtain the depth map is one of the core problems in 3D scene reconstructi...
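The background notes that SfM/SLAM recovers camera pose and scene geometry jointly from multiple views. The classical multi-view ingredient behind this is two-view triangulation; the following minimal DLT (direct linear transform) sketch in NumPy uses hypothetical camera matrices and a hypothetical 3-D point, not data from the patent:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point from two pinhole views.
    Each observation (u, v) contributes two rows of A; the homogeneous 3-D
    point is the null vector of A, recovered via SVD and de-homogenized."""
    A = np.array([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

def project(P, X):
    """Pinhole projection of a 3-D point to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Toy setup: camera 1 at the origin, camera 2 translated 1 m along x.
K = np.array([[100.0, 0.0, 64.0], [0.0, 100.0, 48.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 5.0])     # hypothetical scene point
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(X_hat)   # recovers the original point up to numerical precision
```

With noise-free observations the recovery is exact; with real matches, many such points (and the poses themselves) are refined jointly, which is the optimization problem SfM/SLAM systems solve.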

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/55; G06T17/00
CPC: G06T7/55; G06T17/00; G06T2207/10028; G06T2207/20081; G06T2207/20084; Y02T10/40
Inventors: 金福生, 赵钰, 秦勇
Owner: BEIJING INSTITUTE OF TECHNOLOGY