Indoor scene three-dimensional point cloud reconstruction method and system based on depth information fusion

A 3D point cloud reconstruction technology for indoor scenes that addresses problems such as feature extraction not considering pixel depth values, achieving the effects of improved accuracy, enhanced robustness, and a controlled amount of computation.

Pending Publication Date: 2021-06-22
NORTHWEST A & F UNIV


Problems solved by technology

At present, traditional visual odometry uses feature-point methods such as ORB, SIFT, SURF, or their variants. In the process of feature extraction, char...




Embodiment Construction

[0054] The following will clearly and completely describe the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without making creative efforts belong to the protection scope of the present invention.

[0055] Referring to Figure 1, an embodiment of the present invention provides a method for reconstructing a 3D point cloud of an indoor scene based on depth information fusion. The method specifically includes:

[0056] Step 1: Install the KinectV2 driver in the ROS environment, using libfreenect2 and iai-kinect2 to drive the KinectV2 and ensure the device operates normally.

[0057] Step 2: Use the checkerboard calibration method to calibrate the para...
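As an illustration of what checkerboard calibration recovers, the sketch below shows the pinhole projection model that calibration fits. The intrinsic matrix values (fx, fy, cx, cy) and the 3x3 board geometry are hypothetical placeholders, not the patent's actual calibration results:

```python
import numpy as np

# Hypothetical KinectV2-like intrinsics of the kind checkerboard calibration
# recovers (fx, fy: focal lengths in pixels; cx, cy: principal point).
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])

def project(K, points_cam):
    """Project 3D camera-frame points to pixel coordinates (pinhole model)."""
    uvw = (K @ points_cam.T).T          # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]     # divide by depth to get pixels

# A planar chessboard of 3x3 inner corners, 0.03 m square size, 1 m in front
# of the camera; calibration finds the K that best explains such projections.
grid = np.array([[i * 0.03, j * 0.03, 1.0] for i in range(3) for j in range(3)])
pixels = project(K, grid)
```

The corner at the optical axis projects exactly to the principal point (cx, cy); calibration searches for the K (and distortion terms, omitted here) minimizing the reprojection error over many such board views.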


Abstract

The invention discloses an indoor scene three-dimensional point cloud reconstruction method based on depth information fusion. The method comprises the following steps: collecting a color image and a depth image of a scene; fusing the color image and the depth image to perform feature extraction and matching; determining the pose of the depth camera and screening out key frames; generating a three-dimensional point cloud for each key frame; and stitching the point clouds frame by frame according to the camera poses to obtain a globally consistent three-dimensional point cloud model of the scene. During feature extraction, the method uses both image brightness information and depth information to judge whether a point is a feature point, so the extracted feature points are highly representative. A homogenization operation is also performed on the extracted feature points, ensuring that their number stays within a reasonable range, which effectively controls the computational load of the system and improves its reliability. The accuracy and robustness of the front end are improved, making the back-end optimization result more accurate or reducing its number of iterations.
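The keyframe point cloud generation and frame-by-frame stitching steps described above can be sketched as follows. The intrinsics and the toy 4x4 depth frame are illustrative assumptions, not values from the patent:

```python
import numpy as np

fx = fy = 525.0
cx, cy = 319.5, 239.5   # assumed KinectV2-style intrinsics (hypothetical)

def depth_to_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) to a 3D point cloud in the camera frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]            # drop invalid (zero-depth) pixels

def transform(points, R, t):
    """Apply a camera pose (rotation R, translation t) to stitch a keyframe
    cloud into the globally consistent scene model."""
    return points @ R.T + t

depth = np.full((4, 4), 2.0)             # toy 4x4 depth frame, everything 2 m away
cloud = depth_to_cloud(depth, fx, fy, cx, cy)
world = transform(cloud, np.eye(3), np.array([0.0, 0.0, 1.0]))
```

Each keyframe yields one such cloud; applying each keyframe's estimated pose and concatenating the results produces the globally consistent scene model described in the abstract.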

Description

Technical field

[0001] The present invention relates to the technical fields of computer graphics, computer vision, and robotics, and more specifically to a method and system for reconstructing three-dimensional point clouds of indoor scenes based on depth information fusion.

Background technique

[0002] In recent years, the problem of simultaneous localization and mapping (SLAM) has once again become a hot topic in computer vision and robotics research. SLAM can be divided into two parts: localization and mapping. Localization refers to describing the robot's own position and posture, while mapping describes the robot's surrounding environment. Once the robot knows both its own position and the surrounding environment, it can help humans complete many tasks. With the rapid development of computer vision technology, the cost of expensive products such as cameras and laser ...

Claims


Application Information

IPC (IPC8): G06T7/55; G06T7/80; G06K9/62
CPC: G06T7/55; G06T7/80; G06T2207/10016; G06T2207/10024; G06T2207/10028; G06T2207/30208; G06V10/757
Inventor 杨会君许泽东肖悦秦玉龙卫志豪刘东风张义沈求峰于启瑞
Owner NORTHWEST A & F UNIV