
Multi-view stereoscopic vision three-dimensional scene reconstruction method based on deep learning

A stereo-vision and 3D-scene technology, applied in the fields of computer vision and three-dimensional reconstruction, achieving high-precision 3D scene reconstruction.

Pending Publication Date: 2021-04-30
BEIJING UNIV OF TECH

AI Technical Summary

Problems solved by technology

To this end, the key technical issues that need to be solved include: using deep neural networks to extract and fuse high-quality multi-scale features, avoiding the error accumulation of multiple hand-designed stages; and, since adjacent images generally observe similar depths, using the depth information of adjacent images to refine the predicted initial depth map. A sketch of the second idea follows.
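The refinement idea can be pictured with a short sketch. Everything below (the class name InterFrameDepthRefiner, the residual 2D CNN, and the assumption that the adjacent views' depths have already been warped into the reference frame) is a hypothetical PyTorch illustration, not the patent's actual network:

```python
import torch
import torch.nn as nn

class InterFrameDepthRefiner(nn.Module):
    """Fuses the reference view's initial depth with the depths of adjacent
    views (assumed already warped into the reference frame) and predicts a
    residual correction. Hypothetical sketch, not the patented design."""

    def __init__(self, num_views: int):
        super().__init__()
        # Input channels: 1 initial depth + (num_views - 1) warped adjacent depths.
        self.net = nn.Sequential(
            nn.Conv2d(num_views, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),  # residual depth
        )

    def forward(self, init_depth, warped_adj_depths):
        # init_depth: (B, 1, H, W); warped_adj_depths: (B, N-1, H, W)
        x = torch.cat([init_depth, warped_adj_depths], dim=1)
        return init_depth + self.net(x)  # refined depth map
```

In a full pipeline the warped adjacent depths would come from the same plane-sweep geometry used to build the cost volumes.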




Embodiment Construction

[0018] The specific process of the present invention will be described in detail below:

[0019] 1. Multi-scale feature extraction and fusion

[0020] This part extracts the multi-scale features of the image and aggregates the multi-scale feature volumes. Its innovation is a proposed multi-scale feature volume aggregation network, MFVA-Net (Multi-scale Feature Volume Aggregation Net), which learns context information in feature volumes of different scales, enhances the neural network's ability to predict depth, and further improves the accuracy and completeness of the 3D reconstruction.

[0021] The multi-scale feature extraction and fusion part consists of three stages: 1) multi-scale feature extraction; 2) construction of the feature volumes; 3) aggregation of the multi-scale feature volumes. Its framework is shown in Figure 2; an illustrative sketch of the three stages is given below.
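The patent page gives no layer-level details, so the following Python (PyTorch) sketch of the three stages rests on standard multi-view-stereo assumptions: a small strided CNN for the multi-scale features, variance over views for feature volume construction, and trilinear upsampling plus channel concatenation for the aggregation. Names such as MultiScaleFeatureNet, build_feature_volume, and aggregate_volumes are illustrative only:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFeatureNet(nn.Module):
    """Stage 1: a small strided CNN returning features at three scales."""
    def __init__(self):
        super().__init__()
        self.s1 = nn.Sequential(nn.Conv2d(3, 16, 3, 1, 1), nn.ReLU(True))
        self.s2 = nn.Sequential(nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU(True))
        self.s3 = nn.Sequential(nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU(True))

    def forward(self, img):              # img: (B, 3, H, W)
        f1 = self.s1(img)                # (B, 16, H,   W)
        f2 = self.s2(f1)                 # (B, 32, H/2, W/2)
        f3 = self.s3(f2)                 # (B, 64, H/4, W/4)
        return [f1, f2, f3]

def build_feature_volume(warped_feats: torch.Tensor) -> torch.Tensor:
    """Stage 2: fuse per-view feature volumes of one scale into a single
    volume; variance over views is the common MVS choice assumed here.
    warped_feats: (V, B, C, D, H, W), features of all V views warped onto
    the reference camera's D depth hypothesis planes."""
    return warped_feats.var(dim=0, unbiased=False)   # (B, C, D, H, W)

def aggregate_volumes(volumes):
    """Stage 3: bring the per-scale volumes to a common size and merge them,
    so later depth regression sees context from every scale."""
    target = volumes[0].shape[2:]        # (D, H, W) of the finest scale
    up = [F.interpolate(v, size=target, mode="trilinear", align_corners=False)
          for v in volumes]
    return torch.cat(up, dim=1)          # concatenate along channels
```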

[0022] 1) Multi-scale feature extraction

[0023] The input to the network is N RGB images with known camera parameters; I1 is denoted the reference image...
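Although the sentence is cut off here, the stated convention (N calibrated RGB images, I1 as the reference) already fixes the geometry the later stages rely on. The sketch below, with an assumed ViewInput container and world-to-camera extrinsics, shows how a reference pixel at a hypothesized depth is projected into a source view, the basic plane-sweep step behind feature volume construction:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ViewInput:
    image: np.ndarray   # (H, W, 3) RGB image
    K: np.ndarray       # (3, 3) camera intrinsics
    R: np.ndarray       # (3, 3) rotation, world -> camera
    t: np.ndarray       # (3,)   translation, world -> camera

def project_to_source(u_ref, depth, ref: ViewInput, src: ViewInput):
    """Project the reference pixel u_ref = (x, y) at a hypothesized depth
    into a source view. Standard pinhole geometry, an assumption of this
    sketch rather than a detail taken from the patent text."""
    x_h = np.array([u_ref[0], u_ref[1], 1.0])
    X_ref = depth * (np.linalg.inv(ref.K) @ x_h)   # back-project to ref camera
    X_world = ref.R.T @ (X_ref - ref.t)            # ref camera -> world
    X_src = src.R @ X_world + src.t                # world -> source camera
    u = src.K @ X_src
    return u[:2] / u[2]                            # pixel in the source image
```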



Abstract

The invention relates to a multi-view stereoscopic vision three-dimensional scene reconstruction method based on deep learning. It aims to solve the problem that existing deep-learning-based reconstruction methods build the 3D cost volume from only the last-layer features of an image and make poor use of shallow features, so that information at different scales is lost. Moreover, during depth map refinement those methods consider only the effect of the reference image and ignore the contribution of adjacent images' depths to depth map prediction. To address this, a multi-scale feature extraction and fusion network and a depth map refinement network based on inter-frame correlation are provided to improve the prediction accuracy and completeness of the scene. Compared with existing deep-learning-based methods, the method of the invention better learns the context features of the input images, reconstructs occluded and missing areas of the target scene, recovers the scene's three-dimensional information more completely, and achieves high-precision three-dimensional scene reconstruction.

Description

Technical Field

[0001] The invention belongs to the fields of computer vision and three-dimensional reconstruction, and studies a new three-dimensional reconstruction method.

Background Technique

[0002] High-precision 3D scene reconstruction is crucial for many applications, such as urban 3D maps, monument reconstruction, autonomous driving, and augmented reality. 3D reconstruction based on multi-view stereo vision is also one of the core research problems of computer vision. Traditional multi-view stereo matching reconstruction methods use hand-designed similarity measures and engineered regularizations (such as normalized cross-correlation and semi-global matching) to compute dense correspondences and recover 3D points. Although these methods show good reconstruction results in the ideal Lambertian case, they also have some common limitations. For example, the presence of low-texture, high-gloss, and specular regions of the scene makes...
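For concreteness, here is a minimal NumPy sketch of normalized cross-correlation, one of the hand-designed similarity measures mentioned above (the function name and patch interface are assumptions of this sketch):

```python
import numpy as np

def ncc(patch_a: np.ndarray, patch_b: np.ndarray, eps: float = 1e-8) -> float:
    """Normalized cross-correlation of two equally sized grayscale patches.
    Returns a value in [-1, 1]; 1 means identical up to intensity gain/offset."""
    a = patch_a.astype(np.float64).ravel()
    b = patch_b.astype(np.float64).ravel()
    a -= a.mean()                      # remove intensity offset
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b) + eps
    return float(a @ b / denom)
```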


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T17/00; G06K9/46; G06K9/62; G06N3/04; G06N3/08
CPC: G06T17/00; G06N3/08; G06V10/462; G06N3/045; G06F18/22; G06F18/253
Inventors: 孔德慧, 林瑞, 王少帆, 李敬华, 王立春
Owner: BEIJING UNIV OF TECH