
Context-aware multi-view three-dimensional reconstruction system and method based on deep learning

A context-aware multi-view three-dimensional reconstruction system and method based on deep learning, applied in the field of deep-learning-based 3D reconstruction. It solves problems such as the inability to make full use of the input images, inconsistent 3D shapes, and the inability to process views in parallel, achieving highly consistent results, stable shape optimization, and faster reconstruction.

Pending Publication Date: 2021-03-12
JIANGSU UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

However, RNN-based methods suffer from three limitations. First, when given the same set of images in different orders, the 3D shape of the object reconstructed by the RNN may be inconsistent. Second, due to the long-term memory loss of RNNs, they cannot fully utilize the input images to optimize the reconstruction result. Finally, RNN-based methods are time-consuming because the input images are processed sequentially and cannot be processed in parallel.
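The order-dependence problem can be illustrated with a toy sketch (hypothetical, not from the patent): a sequential RNN-style update gives different results when the same views arrive in a different order, while a symmetric, permutation-invariant fusion such as element-wise max does not.

```python
import numpy as np

def sequential_update(features):
    """Toy RNN-style fusion: h <- tanh(0.5*h + 0.5*x), view by view.
    The result depends on the order in which the views arrive."""
    h = np.zeros_like(features[0])
    for x in features:
        h = np.tanh(0.5 * h + 0.5 * x)
    return h

def symmetric_fusion(features):
    """Permutation-invariant fusion: element-wise max over all views."""
    return np.max(np.stack(features), axis=0)

views = [np.array([1.0, -2.0]), np.array([0.5, 3.0]), np.array([-1.0, 0.0])]

a = sequential_update(views)
b = sequential_update(views[::-1])  # same views, reversed order
print(np.allclose(a, b))            # False: RNN-style output is order-dependent

c = symmetric_fusion(views)
d = symmetric_fusion(views[::-1])
print(np.allclose(c, d))            # True: symmetric fusion is order-invariant
```

This is exactly why the patent's fusion operates over the set of initial shapes rather than a sequence of views.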

Method used




Embodiment Construction

[0039] A context-aware multi-view three-dimensional reconstruction system and method based on deep learning, including an encoder, a decoder, a context fusion module, a refiner, and the corresponding network loss function. The encoder generates n feature maps from the n input images; the decoder takes each feature map as input and reconstructs n initial three-dimensional shapes; the context fusion module takes the initial three-dimensional shapes as input and adaptively selects the higher-quality reconstructed parts of each initial shape for fusion, obtaining a fused three-dimensional shape; the refiner takes the fused three-dimensional shape as input and further corrects the erroneously reconstructed parts, producing the final three-dimensional shape. The present invention provides a non-contact, simple and convenient technique for rapidly reconstructing the three-dimensional shape of an object from views captured from multiple angles.
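The adaptive fusion step described above can be sketched as follows. This is a minimal NumPy illustration under assumed shapes; the per-voxel quality scores would in practice come from a small scoring network, and the grid resolution and score values here are hypothetical stand-ins, not taken from the patent.

```python
import numpy as np

def context_fusion(initial_shapes, scores):
    """Adaptively fuse n initial voxel grids into one.

    initial_shapes: (n, D, D, D) occupancy probabilities, one grid per view.
    scores:         (n, D, D, D) per-voxel quality scores (assumed to be
                    predicted by a scoring network; random stand-ins here).
    A softmax over the view axis weights each voxel toward the view that
    reconstructed it best, then the weighted grids are summed.
    """
    w = np.exp(scores - scores.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)          # softmax over the n views
    return (w * initial_shapes).sum(axis=0)    # fused (D, D, D) grid

rng = np.random.default_rng(0)
n, D = 3, 8                                    # 3 views, 8^3 voxel grid
shapes = rng.random((n, D, D, D))              # stand-in decoder outputs
scores = rng.normal(size=(n, D, D, D))         # stand-in quality scores

fused = context_fusion(shapes, scores)
print(fused.shape)                             # (8, 8, 8)
```

Because the fusion sums over the view axis, its output does not depend on view order. A refiner (per the patent, a stage that corrects erroneously reconstructed parts) would then take `fused` as input and output the final occupancy grid; that stage is omitted here.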

[0040] The present invention will be further described in detail below with r...



Abstract

The invention discloses a context-aware multi-view three-dimensional reconstruction system and method based on deep learning. The system comprises an encoder, a decoder, a context fusion module, a refiner and a network loss function. The encoder generates n feature maps according to the n input images; the decoder reconstructs n initial three-dimensional shapes by taking each feature map as input; the context fusion module takes the initial three-dimensional shapes as input and adaptively selects a reconstruction part with relatively high quality from each initial three-dimensional shape for fusion to obtain a fused three-dimensional shape; and the refiner takes the fused three-dimensional shape as input to further correct a reconstructed error part, so that a final three-dimensional shape is reconstructed. The invention provides a non-contact, simple and convenient technology for quickly reconstructing the three-dimensional shape of an object from views shot from multiple angles, and the method has higher system robustness and reconstruction precision.

Description

Technical field [0001] The present invention belongs to the technical field of reconstructing the three-dimensional shapes of objects from multiple views, and relates to a context-aware multi-view three-dimensional reconstruction system based on deep learning and its method. [0002] Technical background [0003] Traditional three-dimensional reconstruction methods, such as multi-view stereo reconstruction, Structure from Motion (SfM), and Simultaneous Localization and Mapping (SLAM), solve the problem through cross-view image feature matching and multi-view geometric constraints. However, when the interval between views is large, feature matching becomes very difficult due to appearance variation or self-occlusion. To overcome these limitations, many deep-learning-based methods have been developed to reconstruct the three-dimensional shapes of objects, including 3D-R2N2 [Choy et al., 3D-R2N2: a unified approach for single and multi-view 3D object reconstruction. ECCV 2016.], LSM [Kar et...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T17/20; G06N3/08; G06N3/04
CPC: G06T17/20; G06N3/08; G06N3/045
Inventors: 白素琴, 史金龙, 乔亚茹, 钱强, 茅凌波, 束鑫, 欧镇, 田朝晖
Owner: JIANGSU UNIV OF SCI & TECH