Three-dimensional point cloud reconstruction method based on deep learning

A deep learning technology for 3D point clouds, applied to the field of deep-learning-based 3D point cloud reconstruction, which addresses the problems of rough and sparse reconstructed point clouds and achieves the effect of alleviating cross-view differences.

Active Publication Date: 2021-07-09
TIANJIN UNIV


Problems solved by technology

[0005] Although the existing 3D model reconstruction methods based on deep learning can predict a reasonable 3D shape fr…




Embodiment Construction

[0032] To make the purpose, technical solution, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below.

[0033] An embodiment of the present invention provides a method for 3D point cloud reconstruction based on deep learning (see Figure 1). The method includes the following steps:

[0034] 1. Build a sparse point cloud reconstruction module

[0035] First, a sparse point cloud reconstruction module is constructed. It consists of multiple identical sparse point cloud reconstruction subnetworks, each composed of a feature encoder and a point cloud predictor.
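The layout described above (N identical subnetworks, one per view, each pairing a feature encoder with a point cloud predictor) can be sketched in PyTorch as follows. This is a minimal illustration, not the patent's implementation: the latent size (512), the number of points per subnetwork (256), and the placeholder encoder are all assumptions.

```python
# Hypothetical sketch of the sparse point cloud reconstruction module:
# multiple identical subnetworks, one per input view, each made of a
# feature encoder and a point cloud predictor. All dimensions are illustrative.
import torch
import torch.nn as nn

class SparseSubnet(nn.Module):
    def __init__(self, feat_dim=512, num_points=256):
        super().__init__()
        # Placeholder encoder (the patent uses a VGG16 backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        # Point cloud predictor: latent feature -> (num_points, 3) coordinates.
        self.predictor = nn.Sequential(
            nn.Linear(feat_dim, 1024), nn.ReLU(),
            nn.Linear(1024, num_points * 3),
        )
        self.num_points = num_points

    def forward(self, img):
        feat = self.encoder(img)
        pts = self.predictor(feat).view(-1, self.num_points, 3)
        return feat, pts

class SparseModule(nn.Module):
    """Multiple identical subnetworks, one per view."""
    def __init__(self, num_views=3):
        super().__init__()
        self.subnets = nn.ModuleList(SparseSubnet() for _ in range(num_views))

    def forward(self, views):  # views: list of (B, 3, H, W) image tensors
        return [net(v) for net, v in zip(self.subnets, views)]
```

In the full method, the per-view latent features would additionally pass through the cross-view interaction unit before point prediction; that exchange step is omitted here for brevity.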

[0036] (1) Feature encoder: the deep-learning-based two-dimensional image feature extraction network VGG16 is used. The input to the VGG16 network is an image captured or projected from a particular viewpoint of a three-dimensional object, and the network is used to learn visual i...
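A self-contained sketch of a VGG16-style encoder is shown below. It reproduces VGG16's stacked 3x3 convolution blocks and channel widths in plain PyTorch (in practice one would typically load torchvision's pretrained VGG16); the final pooling and projection to a latent vector are assumptions about how the backbone feeds the point cloud predictor.

```python
# Simplified VGG16-style feature encoder: five blocks of 3x3 convolutions
# with max pooling, following VGG16's channel widths (64-128-256-512-512),
# followed by global pooling and a projection to a latent feature vector.
import torch
import torch.nn as nn

def vgg_block(in_ch, out_ch, convs):
    layers = []
    for i in range(convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

class VGGStyleEncoder(nn.Module):
    def __init__(self, feat_dim=512):
        super().__init__()
        self.features = nn.Sequential(
            vgg_block(3, 64, 2),
            vgg_block(64, 128, 2),
            vgg_block(128, 256, 3),
            vgg_block(256, 512, 3),
            vgg_block(512, 512, 3),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)   # collapse spatial dimensions
        self.proj = nn.Linear(512, feat_dim)  # latent feature representation

    def forward(self, img):  # img: (B, 3, H, W) single-view image
        x = self.pool(self.features(img)).flatten(1)
        return self.proj(x)
```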



Abstract

The invention discloses a three-dimensional point cloud reconstruction method based on deep learning. In the method, the coordinate information of a three-dimensional point cloud is predicted by a point cloud predictor from the input latent feature representation, where each branch takes the latent feature representation output by its corresponding feature encoder as input and learns complementary features by combining information from the other branches. By applying a cross-view interaction unit, each sparse point cloud reconstruction subnetwork captures cross-view complementary information and feeds it back to the point cloud predictor to generate sparse point clouds. A global-guidance dense point cloud reconstruction module is then constructed from multiple point cloud feature extraction subnetworks, a global guidance feature learning subnetwork, and a generation layer; each point cloud feature extraction subnetwork consists of a series of weight-sharing multi-layer perceptrons that extract point cloud features from the generated sparse point clouds. Finally, a chamfer distance loss is adopted as the geometric consistency constraint, and a semantic consistency constraint is constructed, to optimize the generation of dense point clouds.
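The geometric consistency constraint named in the abstract is the chamfer distance between point clouds. A minimal PyTorch sketch of the symmetric, squared-distance chamfer formulation is below; this is a common variant suitable for small point sets, and is not necessarily the exact formulation used in the patent.

```python
# Symmetric chamfer distance between two point sets, using squared
# Euclidean distances. Computes the full pairwise distance matrix, so it
# is intended for modest point counts.
import torch

def chamfer_distance(p, q):
    """p: (B, N, 3), q: (B, M, 3) -> (B,) per-batch chamfer distance."""
    # Pairwise squared distances, shape (B, N, M).
    diff = p.unsqueeze(2) - q.unsqueeze(1)
    d2 = (diff ** 2).sum(-1)
    # Average nearest-neighbor distance in each direction, then sum.
    return d2.min(2).values.mean(1) + d2.min(1).values.mean(1)
```

In training, this loss would be applied between the generated (sparse or dense) point cloud and the ground-truth point cloud of the object.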

Description

Technical field

[0001] The invention relates to the field of three-dimensional point cloud reconstruction, and in particular to a three-dimensional point cloud reconstruction method based on deep learning.

Background technique

[0002] As one of the hot research tasks in the field of computer vision, the goal of 3D model reconstruction is to generate a realistic 3D model of the object contained in a given 2D image from the information in that image. 3D models can be represented in a variety of ways, including voxels, meshes, and 3D point clouds. As a typical representative of 3D models, the 3D point cloud has been applied in many fields such as autonomous driving and virtual reality, so the task of 3D point cloud reconstruction has attracted extensive attention from researchers. Moreover, the quality of the generated point cloud model can significantly affect the performance of subsequent tasks, such as 3D model retrieval, classification, and segmentation. However, ...


Application Information

IPC(8): G06T17/00; G06T9/00; G06N3/04; G06N3/08
CPC: G06T17/00; G06T9/002; G06N3/08; G06N3/045
Inventors: 雷建军, 宋嘉慧, 彭勃, 于增瑞
Owner: TIANJIN UNIV