
A 3D reconstruction method of indoor scenes based on RGB-D images

A technology in the field of computer vision for RGB-D images of indoor scenes. It addresses problems such as low reconstruction accuracy, mis-segmentation, and the inability to handle holes in depth images, thereby improving accuracy, suppressing error accumulation, and optimizing the camera pose.

Status: Inactive
Publication Date: 2020-07-10
HUAZHONG UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

[0005] Aiming at the defects of the prior art, the purpose of the present invention is to solve the technical problems that reconstruction accuracy is not high and that mis-segmentation caused by holes in the depth image cannot be resolved.



Embodiment Construction

[0041] In order to make the objects, technical solutions, and advantages of the present invention clearer, the present invention is further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it.

[0042] First, some terms used in the present invention are explained.

[0043] RGB-D image: comprises a color image (RGB image) and a depth image. Usually, the color image and the depth image are registered so that their pixels correspond one to one.

[0044] Depth image (hereinafter referred to as the D image): an image or image channel that contains information about the distance from the viewpoint to the surfaces of objects in the scene. Each pixel value is the actual distance from the sensor to the object.

[0045] 3D point cloud: Project each pixel of the depth ...
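Although the definition above is truncated, the standard construction back-projects every depth pixel into 3D through the pinhole camera model and, for a registered RGB-D pair, attaches the color of the corresponding RGB pixel. A minimal NumPy sketch; the intrinsics fx, fy, cx, cy and the metre-scaled depth are assumptions for illustration, not values from the patent:

```python
import numpy as np

def depth_to_point_cloud(depth, rgb, fx, fy, cx, cy):
    """Return an (N, 6) array of XYZRGB points from a registered RGB-D pair.

    Assumes depth is an (H, W) array in metres with 0 marking holes and
    rgb is the registered (H, W, 3) color image.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth.astype(np.float32)
    x = (u - cx) * z / fx                            # pinhole back-projection
    y = (v - cy) * z / fy
    valid = z > 0                                    # drop depth holes
    points = np.stack([x[valid], y[valid], z[valid]], axis=1)
    colors = rgb[valid].astype(np.float32)           # one-to-one registered pixels
    return np.hstack([points, colors])
```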



Abstract

The invention discloses a three-dimensional reconstruction method for indoor scenes based on RGB-D images. Semantic segmentation results are used to repair holes in the depth image, to provide object contour and category information for the three-dimensional reconstruction, and to obtain the shape and appearance of objects from prior knowledge, thereby providing more accurate data for 3D reconstruction. In turn, the 3D reconstruction provides 3D spatial information for semantic segmentation, resolving the mis-segmentation caused in 2D image segmentation by overlapping objects and illumination effects. A multi-level camera pose estimation is used: sparse feature matching provides a coarse pose estimate, which is then refined by dense geometric and photometric optimization to obtain an accurate camera pose for the reconstruction model. During reconstruction, local optimization is performed on each frame, and a keyframe mechanism is added to establish global optimization and loop-closure detection, with constraints placed on the spatial points corresponding to keyframe pixels. This effectively suppresses error accumulation, further optimizes the camera pose, and improves the accuracy of the reconstruction result.
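The abstract does not detail the multi-level pose estimation; the sketch below illustrates only a common implementation of its coarse stage: sparse ORB feature matching between consecutive RGB frames, back-projection of the matched pixels of the previous frame with its depth image, and a RANSAC PnP solve for a rough pose. The intrinsics, depth scale, and parameter values are assumptions, and the dense geometric/photometric refinement, keyframe optimization, and loop-closure steps are omitted.

```python
import cv2
import numpy as np

FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5   # assumed pinhole intrinsics
DEPTH_SCALE = 1000.0                          # assumed: depth stored in millimetres

def coarse_pose(rgb_prev, depth_prev, rgb_curr):
    """Estimate a rough pose of the current frame w.r.t. the previous frame."""
    gray_prev = cv2.cvtColor(rgb_prev, cv2.COLOR_BGR2GRAY)
    gray_curr = cv2.cvtColor(rgb_curr, cv2.COLOR_BGR2GRAY)

    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(gray_prev, None)
    kp2, des2 = orb.detectAndCompute(gray_curr, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    obj_pts, img_pts = [], []
    for m in matches:
        u, v = map(int, kp1[m.queryIdx].pt)
        z = depth_prev[v, u] / DEPTH_SCALE
        if z <= 0:                        # skip holes in the depth image
            continue
        # back-project the previous-frame pixel to a 3D point
        obj_pts.append([(u - CX) * z / FX, (v - CY) * z / FY, z])
        img_pts.append(kp2[m.trainIdx].pt)

    if len(obj_pts) < 6:                  # not enough correspondences
        return False, None, None

    K = np.array([[FX, 0, CX], [0, FY, CY], [0, 0, 1]], dtype=np.float64)
    ok, rvec, tvec, _ = cv2.solvePnPRansac(
        np.array(obj_pts, dtype=np.float64),
        np.array(img_pts, dtype=np.float64),
        K, None)
    return ok, rvec, tvec                 # to be refined by dense optimization
```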

Description

Technical field

[0001] The invention belongs to the technical field of computer vision, and more specifically relates to a method for three-dimensional reconstruction of indoor scenes based on RGB-D images.

Background technique

[0002] The principle of the Kinect depth camera is that an infrared emitter projects infrared light onto the surface of an object, forming randomly reflected speckles that are received by the depth sensor; the system chip then computes a depth image from them. For transparent materials and surfaces lacking texture, the infrared light is not reflected into speckles, or only poorly, resulting in holes in the depth image. At present, most research works simply preprocess the depth image with a bilateral filter.

[0003] In the prior art, 3D reconstruction based on RGB-D images mainly includes the following: Newcombe et al. directly calculate the 3D coordinates of spatial points from the preprocessed depth image, and...
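As noted above, the common baseline simply smooths the depth image with a bilateral filter, which preserves depth discontinuities but cannot fill the holes left by transparent or texture-less surfaces. A minimal OpenCV sketch of that baseline; the filter parameters are illustrative assumptions, not values from the patent:

```python
import cv2
import numpy as np

def preprocess_depth(depth_mm):
    """Bilateral-filter a raw depth image given in millimetres."""
    depth = depth_mm.astype(np.float32)
    # d: neighbourhood diameter; sigmaColor: depth-difference weight (in mm);
    # sigmaSpace: spatial weight (in pixels) -- illustrative values
    smoothed = cv2.bilateralFilter(depth, d=5, sigmaColor=30.0, sigmaSpace=5.0)
    smoothed[depth_mm == 0] = 0        # zero-valued holes remain holes
    return smoothed
```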


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T7/50, G06T7/70, G06T17/00, G06K9/62, G06K9/46, G06K9/34
CPC: G06T7/50, G06T7/70, G06T17/00, G06T2207/10028, G06V10/267, G06V10/44, G06V10/757
Inventor: 郭红星, 卢涛, 汤俊良, 熊豆, 孙伟平, 夏涛, 范晔斌
Owner: HUAZHONG UNIV OF SCI & TECH