Point cloud three-dimensional reconstruction method based on RGB data and generative adversarial network

A three-dimensional reconstruction technology based on RGB images, applied in the research field of point cloud data processing. It solves the problems of complex operation and rough models in image-sequence-based point cloud generation and three-dimensional reconstruction methods, and achieves convenient processing, low hardware requirements, and convenient data collection.

Active Publication Date: 2020-11-06
NORTHWESTERN POLYTECHNICAL UNIV

AI Technical Summary

Problems solved by technology

However, this method uses image sequence point cloud generation and 3D reconstruction, which is complex to operate and produces a rough model.




Embodiment Construction

[0038] The present invention will now be further described in conjunction with the embodiments and the accompanying drawings:

[0039] Technical scheme of the embodiment

[0040] Step 1: Create a depth image via a generative network.

[0041] To convert RGB images into corresponding depth images, the generator part of the GAN uses a modified pix2pixHD, which enables it to create high-quality synthetic depth images from RGB images while reducing computer hardware requirements.

[0042] The pix2pixHD model uses a single global generator, where the generator G consists of three components: a convolutional front-end, a set of residual blocks, and a transposed convolutional back-end. The discriminator D is split into two sub-discriminators, D1 and D2. Discriminator D1 processes the full-resolution synthetic images produced by the generator, while D2 processes the half-scale synthetic images. Therefore, discriminator D1 provides a global view of the depth image to guide ge...
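The two-scale discriminator input described above can be sketched as follows. This is a minimal illustration, assuming 2x2 average pooling for the half-scale downsampling (the patent does not specify the downsampling operator); `half_scale` is a hypothetical helper, not part of pix2pixHD.

```python
import numpy as np

def half_scale(img):
    # Downsample an image by 2x2 average pooling (assumed operator;
    # the text only states that D2 sees half-scale synthetic images).
    h, w = img.shape[:2]
    h2, w2 = h - h % 2, w - w % 2   # crop to even dimensions
    img = img[:h2, :w2]
    return img.reshape(h2 // 2, 2, w2 // 2, 2, -1).mean(axis=(1, 3))

# D1 would see the full-resolution synthetic depth image,
# D2 the half-scale version of the same image.
full = np.random.rand(256, 256, 1)
half = half_scale(full)
print(full.shape, half.shape)  # (256, 256, 1) (128, 128, 1)
```

Feeding the same image to discriminators at two scales lets D1 judge fine detail at full resolution while D2 judges the coarser global structure.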



Abstract

The invention relates to a point cloud three-dimensional reconstruction method based on RGB data and a generative adversarial network. To solve the problems described in the background art, namely the complex acquisition of point cloud data and the high cost and complicated operation of three-dimensional reconstruction technology, the invention provides a method for generating point cloud data from a single RGB image and completing deep-learning-based point cloud three-dimensional reconstruction. First, a two-dimensional image is captured with an inexpensive ordinary camera, and a depth image estimate is generated from the single RGB image by a generative adversarial network. The depth image estimate is produced using the parameters of the depth camera that captured the training data, and three-dimensional point cloud data is obtained from the depth values. A rough surface model is then obtained through spherical surface mapping, and finally a discriminator is used to evaluate the model to obtain a complete three-dimensional model.
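The depth-to-point-cloud step of this pipeline can be illustrated with standard pinhole back-projection. This is a minimal sketch, not the patent's implementation: the intrinsics fx, fy, cx, cy stand in for the "parameters of the training data depth camera" mentioned above, and `depth_to_point_cloud` is a hypothetical helper.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    # Back-project a depth image into a 3D point cloud using the
    # pinhole camera model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)  # N x 3 points

depth = np.full((4, 4), 2.0)  # toy 4x4 depth map, every pixel 2 m away
pts = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(pts.shape)  # (16, 3)
```

Each pixel contributes one 3D point; the principal point (cx, cy) maps to X = Y = 0 at its measured depth.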

Description

technical field

[0001] The invention belongs to the research field of point cloud data processing and relates to a point cloud three-dimensional reconstruction method based on RGB data and a generative adversarial network, mainly involving technologies such as point cloud data generation, deep learning, generative adversarial networks (GAN), and three-dimensional reconstruction.

background technique

[0002] In recent years, with the development of artificial intelligence technology, 3D reconstruction has been widely applied in many areas of life, such as face recognition, reconstruction of large cultural relics, geographic mapping, autonomous driving, and laser SLAM. Acquiring point cloud data is usually the most critical step in 3D reconstruction.

[0003] Traditional point cloud data is usually acquired with laser scanners, but the cost is sometimes prohibitive. Therefore, it is of great practical significance to study the use of che...

Claims


Application Information

IPC(8): G06T17/00, G06N3/04, G06N3/08, G06T7/50
CPC: G06T17/00, G06T7/50, G06N3/08, G06N3/045
Inventor: 沈扬, 吴亚锋, 唐铭阳
Owner NORTHWESTERN POLYTECHNICAL UNIV