Panorama-based self-supervised learning scene point cloud completion data set generation method

A self-supervised learning and point cloud completion technology, applied in the field of 3D reconstruction, addressing problems such as the difficulty of reconstructing real point cloud scenes, the difficulty of acquiring data, and the lack of completeness

Active Publication Date: 2021-12-17
DALIAN UNIV OF TECH
Cites: 4 · Cited by: 1

AI-Extracted Technical Summary

Problems solved by technology

[0003] However, current scene-level point cloud reconstruction methods face two key problems. First, in more complex scenes it is difficult for a robot to move flexibly, acquiring multiple viewpoints is time-consuming and laborious, and the quality of the global scene reconstruction is even harder to guarantee.
Second, in an open environment there are many types of indoor scenes, and it is diffi...

Abstract

The invention belongs to the technical field of three-dimensional reconstruction in computer vision, and provides a panorama-based self-supervised learning scene point cloud completion data set generation method. A panoramic RGB image, a panoramic depth map, and a panoramic normal map under the same viewpoint are used as input, and paired incomplete point clouds and target point clouds with RGB information and normal information are generated, constructing a self-supervised learning data set for training a scene point cloud completion network. The key points of the invention are the handling of the stripe problem and the point-to-point occlusion problem in viewpoint-conversion-based occlusion prediction and in the equirectangular projection and conversion process. The method simplifies the acquisition of real scene point cloud data, proposes an occlusion prediction approach based on viewpoint conversion, and designs a viewpoint selection strategy.

Application Domain

3D modelling

Technology Topic

Depth map · Data set +6


Examples

  • Experimental program (1)

Example Embodiment

[0042] Specific embodiments of the present invention are further described below in conjunction with the drawings and technical solutions.
[0043] The present invention is based on the 2D-3D-Semantics dataset released by Stanford University. This dataset covers 6 large indoor areas from 3 different buildings, mainly educational and office spaces. The dataset contains a total of 1413 equirectangular panoramic RGB images, together with corresponding depth maps, surface normal maps, semantic annotation maps, and camera metadata, which are sufficient to support the panorama-based self-supervised learning scene point cloud completion dataset generation method proposed by the present invention. In addition, other equirectangular panoramas, whether captured or otherwise collected, are also applicable to the present invention.
[0044] The present invention comprises four main modules: a 2D-3D equirectangular projection module, a viewpoint selection module, a 3D-2D equirectangular projection and point-to-point occlusion processing module, and a 2D-3D equirectangular projection and angle mask filtering module, as shown in Figure 1. First, the 2D-3D equirectangular projection module takes the panoramic RGB image C1, the panoramic depth map D1, and the panoramic normal map N1 under viewpoint v1 as input, and generates an initial point cloud P1 with RGB information and an initial point cloud P2 with normal information. Second, the viewpoint selection module takes the initial point cloud P1 as input and generates a new occlusion-prediction viewpoint v2. Third, the 3D-2D equirectangular projection and point-to-point occlusion processing module takes the initial point clouds P1 and P2 and the viewpoint v2 as input, and generates the panoramic RGB image C2, the panoramic depth map D2, and the panoramic normal map N2 under viewpoint v2. Fourth, the 2D-3D equirectangular projection and angle mask filtering module takes C2, D2, and N2 as input, and generates an incomplete point cloud P3 with RGB information, an incomplete point cloud P4 with normal information, and an angle mask for filtering the striped regions. Finally, the resulting incomplete point clouds P3 and P4 and the angle mask can be fed into the scene point cloud completion network, which produces the completed point cloud P5. Here, incomplete point cloud P3 (input) and initial point cloud P1 (expected target), or incomplete point cloud P4 (input) and initial point cloud P2 (expected target), can be used as self-supervised learning data pairs.
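The first module's 2D-3D equirectangular projection (back-projecting a depth panorama into a point cloud) can be sketched as follows. This is a minimal illustration, not the patent's exact implementation; the axis convention (y-up, z-forward) and the half-pixel offset are assumptions.

```python
import numpy as np

def panorama_to_pointcloud(depth, rgb):
    """Back-project an equirectangular depth panorama into a 3D point cloud.

    depth: (H, W) metric depth per pixel; rgb: (H, W, 3) per-pixel colors.
    Returns (N, 3) points and (N, 3) colors for pixels with valid depth.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Pixel centers to spherical angles of the equirectangular mapping
    lon = (u + 0.5) / W * 2.0 * np.pi - np.pi   # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v + 0.5) / H * np.pi   # latitude in (-pi/2, pi/2)
    # Unit ray direction for each pixel (y-up, z-forward convention assumed)
    dirs = np.stack([np.cos(lat) * np.sin(lon),
                     np.sin(lat),
                     np.cos(lat) * np.cos(lon)], axis=-1)  # (H, W, 3)
    valid = depth > 0
    points = dirs[valid] * depth[valid][..., None]
    colors = rgb[valid]
    return points, colors
```

The same routine applied to the normal panorama N1 instead of the RGB panorama yields the point cloud with per-point normals.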
[0045] An example visualization of the intermediate results is shown in Figure 2: the panoramic RGB image C1, panoramic depth map D1, and panoramic normal map N1 under viewpoint v1, together with the initial point clouds P1 and P2; and the panoramic RGB image C2, panoramic depth map D2, and panoramic normal map N2 under viewpoint v2, together with the incomplete point clouds P3 and P4.
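The 3D-2D re-projection with point-to-point occlusion handling can be sketched with a per-pixel z-buffer: every point is mapped to an equirectangular pixel as seen from the new viewpoint, and where several points land on the same pixel only the nearest survives. This is an illustrative sketch under the same assumed axis convention, not the patent's exact occlusion-prediction procedure.

```python
import numpy as np

def pointcloud_to_panorama(points, colors, viewpoint, H, W):
    """Render a point cloud into equirectangular depth/RGB panoramas as
    seen from `viewpoint`, resolving point-to-point occlusion with a
    z-buffer (nearest point per pixel wins)."""
    rel = points - viewpoint                         # (N, 3)
    r = np.linalg.norm(rel, axis=1)
    lon = np.arctan2(rel[:, 0], rel[:, 2])
    lat = np.arcsin(np.clip(rel[:, 1] / np.maximum(r, 1e-9), -1.0, 1.0))
    u = np.clip(((lon + np.pi) / (2.0 * np.pi) * W).astype(int), 0, W - 1)
    v = np.clip(((np.pi / 2.0 - lat) / np.pi * H).astype(int), 0, H - 1)
    depth = np.full((H, W), np.inf)
    image = np.zeros((H, W, 3))
    # z-buffer via write order: scatter far-to-near so the nearest point
    # for each pixel is written last and overwrites all occluded ones
    order = np.argsort(-r)
    depth[v[order], u[order]] = r[order]
    image[v[order], u[order]] = colors[order]
    depth[np.isinf(depth)] = 0.0                     # 0 = no point observed
    return depth, image
```

Pixels left at zero depth correspond to regions occluded from (or never seen by) the original viewpoint, which is what makes the re-projected point cloud incomplete relative to P1.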
[0046] An example visualization of the angle mask is shown in Figure 3, which shows the mask generated when viewing a planar surface on a table from a nearly horizontal angle. It can be seen that the filtered region is striped; removing it effectively solves the point cloud incompleteness artifacts caused by such grazing viewing angles.
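One common way to build such an angle mask, sketched below as an assumption rather than the patent's exact rule, is to filter points whose surface normal is nearly perpendicular to the viewing ray: at grazing angles the equirectangular re-projection produces the striped artifacts described above. The 75° threshold is illustrative.

```python
import numpy as np

def angle_mask(points, normals, viewpoint, threshold_deg=75.0):
    """Boolean mask over points: True = keep, False = filter out as a
    grazing-angle ("striped") point. A point is filtered when the angle
    between its surface normal and the ray toward the viewpoint exceeds
    `threshold_deg` (an illustrative default)."""
    view_dirs = viewpoint - points
    view_dirs = view_dirs / np.maximum(
        np.linalg.norm(view_dirs, axis=1, keepdims=True), 1e-9)
    n = normals / np.maximum(
        np.linalg.norm(normals, axis=1, keepdims=True), 1e-9)
    # |cos| so normal orientation (inward/outward) does not matter
    cos_angle = np.abs(np.sum(view_dirs * n, axis=1))
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return angle < threshold_deg
```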

