
A 3D Model Reconstruction Method Based on Mesh Deformation

The invention relates to 3D model and mesh technology in the field of 3D reconstruction. It addresses problems such as limited geometric priors and the inability to accurately reconstruct geometric structures, and achieves high flexibility.

Active Publication Date: 2022-04-15
WUHAN UNIV

AI Technical Summary

Problems solved by technology

Inferring the 3D model of an object from only a single RGB image is undoubtedly very attractive, but such methods are inherently limited by the single view: the geometric prior obtained from an image taken at only one angle is often too limited to accurately reconstruct the geometry of parts of the object not visible in that image.




Embodiment Construction

[0032] In order to help those of ordinary skill in the art understand and implement the present invention, it will be described in further detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the embodiments described here are only for illustrating and explaining the present invention and are not intended to limit it.

[0033] The deep learning network of the present invention takes only images from several viewpoints as input and outputs a reconstructed three-dimensional mesh model. As shown in Figure 1, the network comprises two basic modules: (1) a mesh deformation module based on a graph convolutional neural network, and (2) a discrete-view image feature fusion module. As training proceeds, the graph-convolution-based mesh deformation module gradually deforms the initial 3D shape ...
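The mesh deformation step described above can be sketched as follows. This is a minimal illustration under assumptions, not the patent's actual implementation: it assumes a Pixel2Mesh-style update in which each vertex carries its 3D coordinates plus per-vertex pooled image features, and a normalized graph convolution over the mesh adjacency predicts per-vertex offsets. All function names and weight shapes are hypothetical.

```python
import numpy as np

def adjacency_from_faces(num_verts, faces):
    """Build a symmetric vertex adjacency matrix (with self-loops) from triangle faces."""
    A = np.eye(num_verts)
    for a, b, c in faces:
        for i, j in ((a, b), (b, c), (c, a)):
            A[i, j] = A[j, i] = 1.0
    return A

def graph_conv(A, X, W):
    """One graph convolution layer: row-normalized neighborhood averaging, then ReLU."""
    A_norm = A / A.sum(axis=1, keepdims=True)
    return np.maximum(0.0, A_norm @ X @ W)

def deform_step(verts, faces, img_feats, W_hidden, W_out):
    """Deform a mesh once: concatenate vertex coordinates with per-vertex image
    features, run a graph convolution, and add the predicted 3D offsets."""
    A = adjacency_from_faces(len(verts), faces)
    A_norm = A / A.sum(axis=1, keepdims=True)
    X = np.concatenate([verts, img_feats], axis=1)  # (n, 3 + f)
    H = graph_conv(A, X, W_hidden)                  # hidden representation
    offsets = A_norm @ H @ W_out                    # linear output layer -> (n, 3)
    return verts + offsets

# Toy usage: deform a tetrahedron with random stand-in fused image features.
rng = np.random.default_rng(0)
verts = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
img_feats = rng.normal(size=(4, 2))      # per-vertex features from the fusion module
W_hidden = rng.normal(size=(5, 8)) * 0.1
W_out = rng.normal(size=(8, 3)) * 0.1
new_verts = deform_step(verts, faces, img_feats, W_hidden, W_out)
```

In the actual network the per-vertex image features would come from the discrete-view feature fusion module, and the deformation step would be applied repeatedly as training proceeds.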



Abstract

The present invention proposes a three-dimensional model reconstruction method based on mesh deformation. A training sample set is constructed, including discrete-viewpoint pictures of multiple models and the corresponding three-dimensional point cloud data. A deep learning network model based on a graph convolutional neural network is set up; it includes a discrete-viewpoint feature fusion module and a mesh deformation module, with the output of the discrete-viewpoint feature fusion module connected to the input of the mesh deformation module. A loss function is set, and the network model is trained on the training sample set. The discrete-viewpoint pictures of the object to be reconstructed are then input into the trained network model, which automatically reconstructs the 3D mesh model and evaluates its accuracy. By learning from discrete-viewpoint images and 3D point cloud data sets of objects, this method supports stable and accurate automatic 3D mesh model reconstruction for objects of different types and sizes.
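The abstract states only that "a loss function is set" against 3D point cloud data, without specifying which one. A common choice for comparing predicted mesh vertices against a ground-truth point cloud is the symmetric Chamfer distance; the sketch below is an illustrative assumption, not necessarily the loss used by the patent.

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric Chamfer distance between point sets P (n, 3) and Q (m, 3):
    mean squared distance from each point to its nearest neighbor in the
    other set, summed over both directions."""
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(axis=-1)  # (n, m) pairwise
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# Identical clouds give zero loss; the loss grows as the sets drift apart.
P = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
loss_zero = chamfer_distance(P, P)          # 0.0
loss_shift = chamfer_distance(P, P + 0.5)   # 1.5
```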

Description

Technical Field [0001] The invention belongs to the field of three-dimensional reconstruction, and in particular relates to a three-dimensional model reconstruction method based on mesh deformation. Background Art [0002] 3D reconstruction is an extremely challenging problem that has been studied in the field of computer vision for decades. Traditional methods based on multi-view geometry, such as many SfM and SLAM algorithms, involve a series of complex steps, including feature extraction, feature matching, and triangulation of matched points. As a result, these methods are not robust in challenging scenarios where features cannot be extracted or matched reliably. Furthermore, their 3D reconstruction result is usually a sparse reconstruction that cannot be used directly. To overcome these limitations, several learning-based methods have emerged, which use deep visual image features to perform depth regression on the scene and obtain dense reconstruction results...
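For context on the "triangulation of matched points" step in the traditional pipeline above, here is a minimal linear (DLT) triangulation of one matched point from two views. The projection matrices and the 3D point are made up for illustration; real SfM pipelines also handle noise, outliers, and bundle adjustment.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its pixel
    coordinates x1, x2 under 3x4 projection matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize

# Two toy cameras: identity intrinsics, second camera translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([1.0, 2.0, 5.0])
h1 = P1 @ np.append(X_true, 1.0)
h2 = P2 @ np.append(X_true, 1.0)
x1, x2 = h1[:2] / h1[2], h2[:2] / h2[2]  # project into each view
X_rec = triangulate(P1, P2, x1, x2)
```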

Claims


Application Information

Patent Type & Authority Patents(China)
IPC(8): G06T17/20, G06N3/04
CPC: G06T17/205, G06N3/045
Inventor 姚剑, 潘涛, 陈凯, 涂静
Owner WUHAN UNIV