
Three-dimensional reconstruction method based on deep learning

A 3D reconstruction method based on deep learning that achieves high-precision reconstruction, avoids the accumulation of multi-stage errors, and avoids the effects of camera calibration.

Active Publication Date: 2019-07-09
BEIJING UNIV OF TECH

AI Technical Summary

Problems solved by technology

[0004] In order to overcome the defects of the prior art, the technical problem to be solved by the present invention is to provide a 3D reconstruction method based on deep learning that does not require manually designed, complex feature algorithms and avoids elaborate camera calibration and fine-grained process design. By learning from what it sees, the method can expand its "knowledge" and reconstruct the "unknown", compensating for the inherent "what you know is what you see" limitation of traditional reconstruction methods: it can not only convert the input depth information with high fidelity, but can also accurately predict the missing parts of the object, thereby achieving high-precision 3D reconstruction.

Method used




Embodiment Construction

[0013] As shown in Figure 3, the deep-learning-based 3D reconstruction method comprises the following steps:

[0014] (1) Reconstruct the complete 3D shape of the target from the constrained latent vector of the input image, learning the mapping between partial and complete 3D shapes, and thereby realizing 3D reconstruction from a single depth image;
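The partial-to-complete mapping of step (1) can be pictured as an encoder that compresses a partial voxel grid into a latent vector and a decoder that expands that vector into a complete grid of occupancy probabilities. The following is a minimal numpy sketch: the grid size, latent dimension, and untrained random weights are illustrative assumptions, not the patent's actual network.

```python
import numpy as np

rng = np.random.default_rng(0)
GRID, LATENT = 16, 64  # illustrative voxel resolution and latent size

# Untrained random weights stand in for the learned encoder/decoder.
W_enc = rng.standard_normal((GRID**3, LATENT)) * 0.01
W_dec = rng.standard_normal((LATENT, GRID**3)) * 0.01

def reconstruct(partial):
    """Map a partial voxel grid to a complete grid of occupancy probabilities."""
    z = np.tanh(partial.reshape(-1) @ W_enc)      # constrained latent vector
    logits = z @ W_dec                            # decode latent to voxel logits
    probs = 1.0 / (1.0 + np.exp(-logits))         # sigmoid: floating occupancy values
    return probs.reshape(GRID, GRID, GRID)

# A sparse binary grid emulates a partial scan from a single depth image.
partial = (rng.random((GRID, GRID, GRID)) > 0.7).astype(float)
complete = reconstruct(partial)
```

In the trained system, the decoder's floating occupancy values are exactly what step (3) later binarizes.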

[0015] (2) Learn an intermediate feature representation between the real 3D object and the reconstructed object to obtain the target latent variable used in step (1);
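Step (2) can be read as constraining the latent vector of a partial input toward an intermediate representation learned from complete, ground-truth shapes. Below is a hedged numpy sketch of such a latent-space loss; the two linear encoders and all dimensions are hypothetical, chosen only to make the idea concrete.

```python
import numpy as np

rng = np.random.default_rng(1)
D_IN, LATENT = 4096, 64  # illustrative flattened-voxel and latent dimensions

# Hypothetical encoders: a fixed "teacher" for complete ground-truth shapes,
# and the encoder being trained on partial inputs.
W_complete = rng.standard_normal((D_IN, LATENT)) * 0.01
W_partial = rng.standard_normal((D_IN, LATENT)) * 0.01

complete_shape = rng.random(D_IN)
partial_shape = complete_shape * (rng.random(D_IN) > 0.5)  # occlude about half

z_target = np.tanh(complete_shape @ W_complete)  # target latent variable, step (2)
z_pred = np.tanh(partial_shape @ W_partial)      # latent predicted from partial input

# Latent-space loss pulling the partial encoder toward the intermediate
# representation learned from complete shapes.
latent_loss = np.mean((z_pred - z_target) ** 2)
```

Minimizing such a loss is what makes the latent vector of step (1) "constrained": the partial view is forced to encode like the complete object.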

[0016] (3) Convert the floating voxel values predicted in step (1) into binary values using an extreme learning machine, completing the high-precision reconstruction.
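An extreme learning machine, as invoked in step (3), is a single-hidden-layer network whose hidden weights are random and whose output weights are solved in closed form by least squares; thresholding its output yields binary voxel values. A minimal numpy sketch on toy data (the feature dimension, hidden size, and threshold are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: per-voxel features (e.g. the floating occupancy value plus local
# context) and ground-truth binary occupancies. Sizes are illustrative.
X = rng.random((200, 8))                   # 200 voxels, 8-dim feature each
y = (X[:, 0] > 0.5).astype(float)          # ground-truth binary occupancy

# Extreme learning machine: random hidden layer, closed-form output weights.
n_hidden = 32
W = rng.standard_normal((X.shape[1], n_hidden))
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)                     # random hidden activations
beta = np.linalg.pinv(H) @ y               # least-squares output weights

# Binarize the ELM output to obtain the final 0/1 voxel values.
pred = (H @ beta > 0.5).astype(float)
accuracy = (pred == y).mean()
```

The appeal of an ELM here is that training reduces to one pseudoinverse, avoiding iterative optimization for the final binarization stage.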

[0017] The present invention uses a deep neural network for high-performance feature extraction, avoiding the accumulation of multi-stage errors inherent in manual design; the input image is constrained by learning latent information of the 3D shape, so that the missing par...



Abstract

The invention discloses a three-dimensional reconstruction method based on deep learning, comprising the steps of: (1) reconstructing the complete three-dimensional shape of a target from a constrained latent vector of an input image, learning the mapping between partial and complete three-dimensional shapes, and thereby achieving three-dimensional reconstruction from a single depth image; (2) learning an intermediate feature representation between the real three-dimensional object and the reconstructed object so as to obtain the target latent variable used in step (1); and (3) converting the floating voxel values predicted in step (1) into binary values using an extreme learning machine to complete the high-precision reconstruction.

Description

Technical Field

[0001] The present invention relates to the technical field of computer vision and three-dimensional reconstruction, and in particular to a three-dimensional reconstruction method based on deep learning.

Background

[0002] Vision-based 3D reconstruction is the computational process of recovering the 3D information (shape, texture, etc.) of an object from images acquired by a visual sensor. Accurate 3D reconstruction is crucial for many applications, such as the restoration of cultural relics, robotic grasping, and automatic obstacle avoidance. Current traditional 3D reconstruction methods have certain limitations: they require precisely calibrated cameras and high-quality visual imaging components; the reconstruction process includes multiple steps such as image preprocessing, point cloud registration, and data fusion, which easily leads to error accumulation and reduces reconstruction accuracy; and it is difficult to re...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T17/00; G06N3/04
CPC: G06T17/00; G06N3/048; G06N3/045; G06N3/088; G06T7/50; G06T2207/20084; G06V20/653; G06V10/82; G06N3/047; Y02T10/40; G06T17/20
Inventors: 孔德慧, 刘彩霞, 王少帆, 李敬华, 王立春
Owner: BEIJING UNIV OF TECH