
Three-dimensional model reconstruction method based on grid deformation

A 3D-model and mesh technology applied in the field of 3D reconstruction; it addresses problems such as the limited geometric prior available from a single view and the resulting inability to accurately reconstruct geometric structure, and achieves high flexibility.

Active Publication Date: 2019-07-16
WUHAN UNIV

AI Technical Summary

Problems solved by technology

Inferring the 3D model of an object from only a single RGB image is undoubtedly attractive, but such methods are inherently limited by the single view: the geometric prior obtained from an image taken from only one angle is usually too limited to accurately reconstruct the geometry of object parts that are not visible in that image.

Embodiment Construction

[0032] To help those of ordinary skill in the art understand and implement the present invention, the invention is described in further detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the implementation examples described here are intended only to illustrate and explain the present invention, not to limit it.

[0033] The deep learning network of the present invention takes images from only a few viewpoints as input and outputs a reconstructed three-dimensional mesh model. As shown in Figure 1, the basic module of the network mainly includes two parts: (1) a mesh deformation module based on a graph convolutional neural network, and (2) a discrete-view image feature fusion module. As training proceeds, the graph-convolution-based mesh deformation module gradually deforms the initial 3D shape mesh ...
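The visible text does not disclose the exact layer equations of the mesh deformation module, so the following is only a minimal PyTorch sketch of what a graph-convolution-based deformation block of this kind typically looks like. The class names, hidden sizes, and the residual-offset formulation are assumptions for illustration, not the patented implementation.

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One graph-convolution layer: update each vertex from itself and its neighbours."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w_self = nn.Linear(in_dim, out_dim)
        self.w_neigh = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (V, in_dim) vertex features, adj: (V, V) row-normalised adjacency matrix
        neigh = adj @ x                               # aggregate 1-ring neighbour features
        return torch.relu(self.w_self(x) + self.w_neigh(neigh))

class MeshDeformBlock(nn.Module):
    """Regress a per-vertex 3D offset from vertex positions plus fused image features."""
    def __init__(self, img_feat_dim, hidden=192, num_layers=3):
        super().__init__()
        dims = [3 + img_feat_dim] + [hidden] * num_layers
        self.layers = nn.ModuleList(
            [GraphConv(d_in, d_out) for d_in, d_out in zip(dims[:-1], dims[1:])]
        )
        self.to_offset = nn.Linear(hidden, 3)         # predict a 3D displacement per vertex

    def forward(self, verts, img_feats, adj):
        # verts: (V, 3), img_feats: (V, F) features fused from the discrete input views
        x = torch.cat([verts, img_feats], dim=-1)
        for gconv in self.layers:
            x = gconv(x, adj)
        return verts + self.to_offset(x)              # deformed vertex positions
```

In a pipeline of this style, several such blocks would be applied in sequence, each further deforming the mesh produced by the previous stage while being conditioned on features fused from the discrete input views.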

Abstract

The invention provides a three-dimensional model reconstruction method based on grid deformation. The method comprises the following steps: constructing a training sample set by producing discrete-view pictures of a plurality of models together with the corresponding three-dimensional point cloud data; setting up a deep learning network model based on a graph convolutional neural network, which comprises a discrete-view feature fusion module and a mesh deformation module, the output of the discrete-view feature fusion module being connected to the input of the mesh deformation module; setting a loss function and training the network model on the training sample set; and inputting the discrete-view pictures of the object to be reconstructed into the trained network model, automatically reconstructing the three-dimensional mesh model, and evaluating its precision. By learning from and training on discrete-view pictures and three-dimensional point cloud data sets of objects, the method can stably and accurately perform automatic three-dimensional mesh model reconstruction for objects of different types and sizes.
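The abstract does not specify which loss function is set. When the ground truth is a 3D point cloud, as here, a commonly used choice is the bidirectional Chamfer distance between points sampled from the predicted mesh and the reference cloud; the sketch below is a generic PyTorch illustration under that assumption, with the function name and tensor shapes chosen for this example.

```python
import torch

def chamfer_distance(pred_pts: torch.Tensor, gt_pts: torch.Tensor) -> torch.Tensor:
    """Bidirectional Chamfer distance between point sets of shape (N, 3) and (M, 3)."""
    d = torch.cdist(pred_pts, gt_pts, p=2) ** 2        # pairwise squared distances, (N, M)
    # nearest ground-truth point for every prediction, and vice versa
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

# Hypothetical training step: compare vertices (or surface samples) of the
# predicted mesh against the ground-truth point cloud from the sample set.
# loss = chamfer_distance(pred_vertices, gt_point_cloud)
# loss.backward(); optimizer.step()
```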

Description

Technical field
[0001] The invention belongs to the field of three-dimensional reconstruction, and in particular relates to a three-dimensional model reconstruction method based on grid deformation.
Background technique
[0002] 3D reconstruction is an extremely challenging problem that has been studied in the field of computer vision for decades. Traditional methods based on multi-view geometry, such as many SfM and SLAM algorithms, involve a series of complex processes, including feature extraction, feature matching, and triangulation of matched points. As a result, these methods are not robust in challenging scenarios where feature extraction or matching cannot be performed reliably. Furthermore, their 3D reconstruction result is usually a sparse reconstruction that cannot be used directly. To overcome these limitations, several learning-based methods have emerged, which use deep visual image features to regress depth for the scene and obtain dense reconstruction results ...
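For contrast with the learned approach, the classical two-view pipeline that the background refers to (feature extraction, feature matching, triangulation of matched points) can be sketched with OpenCV as follows. This is purely illustrative and not part of the patented method; the function name is hypothetical and the camera intrinsics K are assumed to be known.

```python
import cv2
import numpy as np

def two_view_sparse_reconstruction(img1, img2, K):
    """Classical pipeline: detect features, match them, estimate relative pose, triangulate."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)        # feature extraction
    kp2, des2 = sift.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(des1, des2)                   # feature matching
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)        # relative camera pose

    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])     # first camera at the origin
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)  # matched-point triangulation
    return (pts4d[:3] / pts4d[3]).T                        # sparse 3D points, shape (N, 3)
```

The sparsity of the returned point set and the reliance on successful feature matching are exactly the limitations the learning-based method of the invention aims to avoid.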

Application Information

IPC(8): G06T17/20; G06N3/04
CPC: G06T17/205; G06N3/045
Inventor: 姚剑, 潘涛, 陈凯, 涂静
Owner: WUHAN UNIV