Three-dimensional reconstruction method based on convolutional neural network

A convolutional neural network and three-dimensional reconstruction technology, applied in the field of three-dimensional scene reconstruction based on convolutional neural networks. It addresses the problems of inaccurate feature extraction, long computation time, and poor abstraction ability, and achieves good real-time performance with high retrieval accuracy and efficiency.

Active Publication Date: 2019-03-01
DALIAN UNIV OF TECH

AI Technical Summary

Problems solved by technology

[0010] In order to solve the technical bottlenecks of inaccurate and incomplete feature extraction, poor abstraction ability, and long computation time.

Method used



Examples


Example Embodiment

[0035] The specific embodiments of the present invention are further described below in conjunction with the accompanying drawings and technical solutions.

[0036] The 3D reconstruction method based on a convolutional neural network is realized through three modules; the steps are as follows:

[0037] (1) Preprocessing module

[0038] (1.1) Module input: an RGBD camera is used to collect information about indoor objects and to build the 3D scene model; after modeling, poorly reconstructed objects in the scene are segmented, and the scanned objects together with the database objects serve as the input of the preprocessing module;
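As an illustrative sketch of how an RGBD frame becomes the point cloud this module consumes: each depth pixel is back-projected through the pinhole camera model. This is standard practice, not a detail stated in the patent; the intrinsics `fx, fy, cx, cy` below are hypothetical values.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into a 3D point cloud
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Demo: a tiny 2x2 depth image, every pixel 1 m away
cloud = depth_to_point_cloud(np.ones((2, 2)), fx=525.0, fy=525.0, cx=0.5, cy=0.5)
```

In practice the per-frame clouds would be fused into the scene model before segmentation.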

[0039] (1.2) Extract surface information: virtual scanning technology is used to sample the dense regions of the point cloud; among the sampled points, those with the largest change in normal vector direction are selected as feature points, and the normal vector and curvature information of the feature points is used as the point cloud region's under...
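The normal/curvature selection in step (1.2) can be sketched with a common PCA-based estimator: the eigenvector of the smallest covariance eigenvalue approximates the surface normal, and the "surface variation" ratio serves as a curvature proxy. This is a generic illustration under assumed parameters (neighbourhood size `k=8`), not the patent's exact virtual-scanning procedure.

```python
import numpy as np

def estimate_normal_curvature(cloud, idx, k=8):
    """PCA over the k nearest neighbours of point `idx`: the eigenvector
    of the smallest eigenvalue approximates the surface normal, and
    lambda_min / sum(lambda) is the surface-variation curvature proxy."""
    d = np.linalg.norm(cloud - cloud[idx], axis=1)
    nbrs = cloud[np.argsort(d)[:k]]
    eigval, eigvec = np.linalg.eigh(np.cov(nbrs.T))  # ascending eigenvalues
    return eigvec[:, 0], eigval[0] / eigval.sum()

def select_feature_points(cloud, n_feat=4, k=8):
    """Keep the points whose local normal direction changes most,
    i.e. those with the largest curvature proxy."""
    curv = [estimate_normal_curvature(cloud, i, k)[1] for i in range(len(cloud))]
    return np.argsort(curv)[::-1][:n_feat]

# Demo: a flat 5x5 grid with one point lifted out of the plane;
# only the lifted point has non-flat local geometry.
xs, ys = np.meshgrid(np.arange(5.0), np.arange(5.0))
cloud = np.stack([xs.ravel(), ys.ravel(), np.zeros(25)], axis=1)
cloud[12, 2] = 2.0
feature_idx = select_feature_points(cloud, n_feat=1)
```

The selected normals and curvatures would then form the local descriptor fed to the network.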



Abstract

The invention provides a three-dimensional reconstruction method based on a convolutional neural network, belonging to the technical field of computer vision. The extracted three-dimensional features are more accurate and the retrieval accuracy is higher. Compared with popular feature extraction networks, the learning ability of the proposed network is stronger, and the feature information of the object point cloud extracted through the network is richer. The algorithm has good real-time performance: modeling, feature extraction, database retrieval, and final model registration can all be completed in a short time. Moreover, the accuracy of the proposed network model is better than that of other deep-learning-based models, which shows that the network structure can learn the data distribution directly from the three-dimensional point cloud. Compared with traditional feature extraction methods, the proposed convolutional-neural-network-based method significantly reduces computation time, and the Euclidean distance retrieval algorithm further improves retrieval efficiency.
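The Euclidean distance retrieval mentioned in the abstract can be sketched as a brute-force nearest-neighbour search over the CNN feature vectors of the database models; the 2-D feature vectors below are made up for illustration.

```python
import numpy as np

def retrieve_nearest(query_feat, db_feats):
    """Index of the database model whose feature vector is closest
    to the query under Euclidean distance (brute-force search)."""
    dists = np.linalg.norm(db_feats - query_feat, axis=1)
    return int(np.argmin(dists))

# Demo with made-up feature vectors: the query is closest to row 1
db = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 5.0]])
best = retrieve_nearest(np.array([0.9, 0.1]), db)
```

For large databases this linear scan would typically be replaced by a KD-tree or approximate nearest-neighbour index.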

Description

Technical field

[0001] The invention belongs to the technical field of computer vision, and in particular relates to a method for reconstructing a three-dimensional scene based on a convolutional neural network.

Background technique

[0002] Scene modeling has always been a research hotspot in the field of computer vision, and high-precision 3D scene modeling is the prerequisite for technologies such as robot perception and virtual reality. 3D reconstruction generally includes three parts: first, a hand-held camera scans the target to be reconstructed from multiple perspectives; then feature extraction, matching, and camera pose estimation are performed on the scanned multi-frame images; finally, the mapping from two-dimensional pixels to three-dimensional coordinate points is completed through stereo vision technology to obtain the final reconstructed model. However, in previous work, there were objective conditions such as mutual occlusion...
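The final model registration step in such pipelines is commonly solved with the Kabsch least-squares rigid transform over matched 3D correspondences. The sketch below is a generic illustration of that registration step, not the patent's specific algorithm.

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping matched source
    points onto destination points (Kabsch algorithm, no scaling)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # guard against reflections
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Demo: recover a known rotation about z plus a translation
rng = np.random.default_rng(0)
src = rng.normal(size=(10, 3))
c, s = np.cos(0.5), np.sin(0.5)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
dst = src @ Rz.T + np.array([1.0, -2.0, 0.5])
R, t = kabsch(src, dst)
```

In a full system the correspondences would come from feature matching, with RANSAC rejecting outlier matches before the least-squares fit.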

Claims


Application Information

IPC(8): G06T17/00; G06N3/04
CPC: G06T17/00; G06N3/045
Inventor: 王诚斌, 杨鑫, 尹宝才, 魏小鹏, 张强
Owner DALIAN UNIV OF TECH