A 3D point cloud reconstruction method based on deep learning

A 3D point cloud reconstruction technology based on deep learning, applied in the field of deep-learning-based 3D point cloud reconstruction, which addresses problems such as the sparseness and roughness of reconstructed 3D point clouds and achieves the effect of alleviating cross-view differences.

Active Publication Date: 2022-07-19
TIANJIN UNIV


Problems solved by technology

[0005] Although the existing 3D model reconstruction methods based on deep learning can predict a reasonable 3D shape from a limited number of input views, these methods usually directly generate relatively sparse and rough 3D point clouds.

Method used



Examples


Embodiment Construction

[0032] In order to make the objectives, technical solutions and advantages of the present invention clearer, the embodiments of the present invention are further described in detail below.

[0033] An embodiment of the present invention provides a deep learning-based three-dimensional point cloud reconstruction method, see Figure 1. The method includes the following steps:

[0034] 1. Build a sparse point cloud reconstruction module

[0035] First, a sparse point cloud reconstruction module is constructed, which consists of multiple identical sparse point cloud reconstruction subnetworks. Each subnetwork includes a feature encoder and a point cloud predictor.

[0036] (1) Feature encoder: a deep learning-based two-dimensional image feature extraction network, VGG16, is used. The input of the VGG16 network is an image of a three-dimensional object captured or projected from a certain viewpoint. The network is used to learn visual information f...
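As a hedged illustration of the encoder-plus-predictor structure described above, the following NumPy sketch maps a single view to a set of 3D coordinates. The layer shapes, random projections, and function names here are illustrative placeholders only; the patent specifies VGG16 as the actual encoder, which is not reproduced here.

```python
import numpy as np

# Placeholder sketch of one sparse point cloud reconstruction subnetwork:
# a feature encoder producing a latent vector, followed by a point cloud
# predictor mapping it to N x 3 coordinates. All weights are random
# stand-ins, not trained parameters.
rng = np.random.default_rng(0)

def encoder(image, dim=128):
    # Stand-in encoder: flatten the view and apply a fixed random projection
    flat = image.reshape(-1)
    W = rng.standard_normal((dim, flat.size)) / np.sqrt(flat.size)
    return W @ flat  # latent feature representation, shape (dim,)

def predictor(latent, n_points=256):
    # Stand-in predictor: linear map from the latent feature to coordinates
    W = rng.standard_normal((n_points * 3, latent.size)) / np.sqrt(latent.size)
    return (W @ latent).reshape(n_points, 3)

# One view in, one sparse point cloud out
points = predictor(encoder(np.ones((3, 32, 32))))
print(points.shape)  # (256, 3)
```

In the patent's full pipeline, several such subnetworks run in parallel (one per view) and exchange information through cross-view interaction units before the predictor emits coordinates.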



Abstract

The invention discloses a three-dimensional point cloud reconstruction method based on deep learning, which includes: a point cloud predictor predicts the coordinate information of a three-dimensional point cloud according to an input latent feature representation; the representation is used as input to learn complementary features that combine information from other branches; by applying cross-view interaction units, each sparse point cloud reconstruction subnetwork captures cross-view complementary information, which is fed back to the point cloud predictor to generate sparse point clouds; a globally guided dense point cloud reconstruction module is composed of a point cloud feature extraction subnetwork, a globally guided feature learning subnetwork, and a generation layer; each point cloud feature extraction subnetwork consists of a series of multi-layer perceptrons with shared weights, which extract point cloud features from the generated sparse point cloud; the chamfer distance loss is taken as the geometric consistency constraint, and a semantic consistency constraint is constructed to optimize the generation of dense point clouds.
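The abstract names the chamfer distance as the geometric consistency constraint. The sketch below is the standard symmetric chamfer distance between two point sets; the patent's exact formulation (e.g. weighting or squared vs. unsquared distances) may differ.

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric chamfer distance between point sets P (n, 3) and Q (m, 3).

    For each point, find the squared distance to its nearest neighbor in
    the other set, then average both directions.
    """
    # Pairwise squared distances via broadcasting, shape (n, m)
    d = np.sum((P[:, None, :] - Q[None, :, :]) ** 2, axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Identical clouds have zero chamfer distance
P = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
Q = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(chamfer_distance(P, Q))  # 0.0
```

Because the loss only matches nearest neighbors, it is differentiable almost everywhere and needs no point-to-point correspondence, which is why it is a common choice for supervising generated point clouds against ground truth.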

Description

Technical Field

[0001] The invention relates to the field of three-dimensional point cloud reconstruction, and in particular to a three-dimensional point cloud reconstruction method based on deep learning.

Background

[0002] As one of the hot research tasks in the field of computer vision, the goal of 3D model reconstruction is to generate a realistic 3D model of the object contained in a given 2D image from the information in that image. 3D models can be represented in several ways, including voxels, meshes, and 3D point clouds. As a typical representative of 3D models, the 3D point cloud has been used in many fields such as autonomous driving and virtual reality; therefore, the task of 3D point cloud reconstruction has attracted extensive attention from researchers. Furthermore, the quality of the generated point cloud models can significantly affect the performance of subsequent tasks, such as 3D model retrieval, classification, and segmentation. However, due to the irregul...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T17/00, G06T9/00, G06N3/04, G06N3/08
CPC: G06T17/00, G06T9/002, G06N3/08, G06N3/045
Inventors: Lei Jianjun (雷建军), Song Jiahui (宋嘉慧), Peng Bo (彭勃), Yu Zengrui (于增瑞)
Owner: TIANJIN UNIV