
A single-image 3D reconstruction method based on a multi-stage neural network

A neural network and three-dimensional reconstruction technology in the field of computer vision, addressing problems such as the difficulty of fully mining visual cues in an image, large discrepancies between the reconstructed 3D shape and the depicted object, and the limited learning capacity of a single neural network.

Active Publication Date: 2019-02-26
NANJING UNIV

AI Technical Summary

Problems solved by technology

Compared with previous work, the 3D shapes reconstructed by these methods are much improved, but the reconstructed shape can still differ greatly from the object depicted in the original image.
The cause of this phenomenon is that these methods use only a single neural network (a single encoder-decoder pair, referred to as the codec structure) for 3D reconstruction. The learning capacity of a single network is limited, making it difficult to fully mine the visual cues in the image, so the learned 3D shape prior is not strong enough to make the reconstruction highly consistent with the original image.

Method used



Examples


Embodiment

[0148] In this example, Figure 2 shows the input image to be reconstructed. Using the three-dimensional reconstruction method of the present invention, the three-dimensional shape of the object in the image can be reconstructed. The specific implementation process is as follows:

[0149] Through steps 1 to 4, the present invention obtains a trained point cloud generation network model and a trained point cloud refinement network model; the former is used to generate initial point clouds, and the latter to generate fine point clouds.

[0150] In step 5, the user inputs an image containing the chair object to be reconstructed, as shown in Figure 2. The image is fed into the point cloud generation network model and encoded into an image information feature matrix by the image encoder, which is composed of a deep residual network. This feature matrix is then fed into the primary decoder, where the deconvolution branch of the decoder maps the feature matrix int...
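The two-stage inference flow described in this embodiment (image encoder, dual-branch primary decoder producing a coarse cloud, then a refinement stage fusing image and point cloud features) can be sketched as follows. This is only an illustration of the data flow under assumed feature sizes; the "networks" are random linear maps standing in for the trained models, and the function name `reconstruct` and all dimensions are hypothetical.

```python
import numpy as np

def reconstruct(image, n_coarse=256, n_fine=1024, seed=0):
    """Sketch of the two-stage pipeline: the generation network maps an
    image feature to an initial coarse point cloud; the refinement network
    fuses image and point-cloud features to emit a denser, finer cloud.
    All weights are random placeholders, not trained parameters."""
    rng = np.random.default_rng(seed)
    # Stage 1: image encoder (deep residual network stand-in) -> feature vector.
    img_feat = np.tanh(image.flatten() @ rng.standard_normal((image.size, 128)))
    # Dual-branch primary decoder: a "deconvolution" branch and a fully
    # connected branch each predict half of the coarse point cloud.
    deconv_pts = (img_feat @ rng.standard_normal((128, (n_coarse // 2) * 3))
                  ).reshape(-1, 3)
    fc_pts = (img_feat @ rng.standard_normal((128, (n_coarse // 2) * 3))
              ).reshape(-1, 3)
    coarse = np.vstack([deconv_pts, fc_pts])            # (n_coarse, 3)
    # Stage 2: point cloud encoder (max-pooled per-point MLP stand-in).
    pc_feat = np.maximum(coarse @ rng.standard_normal((3, 128)), 0).max(axis=0)
    # Image-point cloud coupler: concatenate the two features, then the
    # advanced decoder predicts the fine point cloud.
    fused = np.concatenate([img_feat, pc_feat])         # (256,)
    fine = (fused @ rng.standard_normal((256, n_fine * 3))).reshape(-1, 3)
    return coarse, fine

img = np.random.default_rng(1).standard_normal((32, 32))  # placeholder image
coarse, fine = reconstruct(img)
print(coarse.shape, fine.shape)   # (256, 3) (1024, 3)
```

In the actual method the coarse cloud would come from learned deconvolution and fully connected layers; here both branches are flat matrix multiplies purely to show how their outputs are concatenated and refined.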



Abstract

The invention discloses a single-image three-dimensional reconstruction method based on a multi-stage neural network. The 3D shapes in an existing 3D shape set are rendered from multiple angles to obtain a training image set, and training point clouds are obtained by sampling points on their surfaces. A point cloud generation network is constructed: an image encoder built from a deep residual network extracts image information, and a dual-branch primary decoder built from a deconvolution network and a fully connected network generates the initial point cloud. A point cloud refinement network is constructed: a point cloud encoder is built from a pose transformation network, a multilayer perceptron, and a max-pooling function; an image encoder is built from a deep residual network; an image-point cloud coupler is built from fully connected layers; and an advanced decoder generates the fine point cloud. The point cloud generation network is trained, and the point cloud refinement network is pre-trained and fine-tuned. The input image is reconstructed using the trained models to obtain a 3D point cloud, and surface mesh reconstruction yields the 3D shape represented as a polygonal mesh.
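The point cloud encoder described in the abstract (a pose transformation network, a shared multilayer perceptron, and a max-pooling function) can be sketched as below. This is a minimal illustration of the data flow, assuming random stand-in weights and hypothetical feature sizes; the key property shown is that the max-pool makes the global feature invariant to the ordering of the input points.

```python
import numpy as np

def point_cloud_encoder(points, seed=42):
    """Sketch of a point cloud encoder: pose transform, shared per-point
    MLP, then a symmetric max-pool producing an order-invariant global
    feature. Weights are random placeholders, not trained parameters."""
    rng = np.random.default_rng(seed)
    # Pose transformation: a 3x3 alignment matrix. A trained network would
    # predict this from the input; here a near-identity placeholder is used.
    t = np.eye(3) + 0.01 * rng.standard_normal((3, 3))
    aligned = points @ t                        # (n_points, 3)
    # Shared multilayer perceptron applied identically to every point.
    w1 = rng.standard_normal((3, 64))
    w2 = rng.standard_normal((64, 256))
    h = np.maximum(aligned @ w1, 0.0)           # ReLU, (n_points, 64)
    h = np.maximum(h @ w2, 0.0)                 # (n_points, 256)
    # Symmetric max-pool over the point dimension: the result does not
    # depend on the order in which the points are listed.
    return h.max(axis=0)                        # (256,)

cloud = np.random.default_rng(0).standard_normal((1024, 3))
feat = point_cloud_encoder(cloud)
print(feat.shape)                               # (256,)
```

Because the same weights are applied to every point and the pool is a max over points, shuffling the rows of `cloud` yields exactly the same feature vector, which is why such an encoder suits unordered point cloud input.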

Description

Technical field

[0001] The invention belongs to the technical field of computer vision, and in particular relates to a single-image three-dimensional reconstruction method based on a multi-stage neural network.

Background technique

[0002] Single-image 3D reconstruction recovers the 3D shape of the object contained in an image from that single image using specific techniques. This task is ill-posed, because the information a single image can provide is extremely limited, so strong prior information is needed to complete it.

[0003] The academic community has proposed many technologies and methods for single-image 3D reconstruction. Among them, reconstruction methods based on visual cues perform three-dimensional reconstruction from a single image according to knowledge or theory that humans have established in computer vision. Such as literature 1: Bichsel, Martin, and ...

Claims


Application Information

IPC(8): G06T17/00; G06T19/20; G06T15/50
CPC: G06T15/506; G06T17/00; G06T19/20; G06T2219/2016; G06T2219/2004
Inventors: 孙正兴, 胡安琦, 王梓轩, 刘川
Owner NANJING UNIV