Indoor environment 3D semantic map construction method based on point cloud deep learning

An indoor-environment semantic mapping technology, applied in image analysis, image enhancement, neural architectures, etc. It addresses problems of existing methods such as absent semantic perception or complex, indirect semantic acquisition, poor adaptability to dynamic scenes, and non-topological map structures that cannot be used directly for navigation and obstacle avoidance.

Pending Publication Date: 2020-10-20
ZHEJIANG UNIV OF TECH


Problems solved by technology

[0007] To address the shortcomings of existing visual SLAM map construction methods, namely poor adaptability to dynamic scenes, a lack of semantic awareness or complex and indirect semantic acquisition, point cloud maps that occupy huge storage space, and non-topological map structures that cannot be used directly for navigation and obstacle avoidance, the present invention provides a method for building a 3D semantic map of an indoor environment based on point cloud deep learning. The method comprises the following steps:
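One of the problems cited above is the huge storage footprint of raw point cloud maps, which motivates the octree semantic map the invention builds. As a minimal illustrative sketch (not the patent's implementation), a flat voxel hash shows the same core idea: many nearby points collapse into one occupied cell, so storage shrinks dramatically. All names and the 0.05 m voxel size are assumptions for illustration only.

```python
import numpy as np

def voxel_downsample(points, voxel_size=0.05):
    """Collapse a dense point cloud onto a sparse voxel grid.

    Illustrative stand-in for an octree map: points quantized to the
    same voxel key are merged into a single representative centroid.
    """
    keys = np.floor(points / voxel_size).astype(np.int64)
    occupied = {}
    for key, p in zip(map(tuple, keys), points):
        occupied.setdefault(key, []).append(p)
    # One centroid per occupied voxel.
    return np.array([np.mean(ps, axis=0) for ps in occupied.values()])

# 10,000 points jittered inside a single 0.05 m cube collapse to one cell.
rng = np.random.default_rng(0)
dense = rng.uniform(0.0, 0.05, size=(10000, 3))
sparse = voxel_downsample(dense, voxel_size=0.05)
print(len(dense), "->", len(sparse))  # -> 10000 -> 1
```

A real octree refines this further by storing occupancy hierarchically, so large free or uniform regions cost a single node instead of many voxels.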




Embodiment Construction

[0055] The present invention is described in detail below with reference to the embodiments and the accompanying drawings, but the invention is not limited thereto.

[0056] The hardware environment required to run the method is a Kinect 2.0 depth sensor and a server with an Intel i7-8700K CPU and a GTX 1080 Ti GPU. The software environment is the Ubuntu 16.04 Linux operating system, the ROS robot development environment, the ORB-SLAM2 open-source framework, the OpenCV open-source vision library, and a deep learning stack of CUDA, cuDNN, and TensorFlow for 3D object detection. Necessary third-party libraries include the DBoW2 visual dictionary library, the Pangolin map display library, and the g2o graph optimization library.

[0057] As shown in Figure 1, the method for building a 3D semantic map of an indoor environment based on point cloud deep learning mainly comprises the following steps:

[0058] (1) Use a depth camera ...
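Step (1) acquires a color map and a depth map from the depth camera; the depth map is what later yields the 3D point cloud. As a hedged sketch of the standard pinhole back-projection involved (the intrinsics and the toy depth image below are made-up example values, not parameters from the patent or from Kinect 2.0):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image to camera-frame 3D points (pinhole model).

    Each pixel (u, v) with depth z maps to
    x = (u - cx) * z / fx,  y = (v - cy) * z / fy.
    """
    v, u = np.indices(depth.shape)
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy 2x4 depth image, every pixel 2.0 m away (illustrative values only).
depth = np.full((2, 4), 2.0)
pts = depth_to_point_cloud(depth, fx=2.0, fy=2.0, cx=1.0, cy=1.0)
print(pts.shape)  # -> (8, 3)
```

In practice the intrinsics come from camera calibration, and the color map supplies a per-point RGB value for the semantic network.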



Abstract

The invention relates to an indoor environment 3D semantic map construction method based on point cloud deep learning. The method comprises the following six parts: (1) acquiring a color map and a depth map of the indoor environment with a depth camera; (2) constructing a point cloud deep learning network to obtain 3D semantic information of objects in the environment; (3) detecting dynamic objects and removing dynamic feature points; (4) solving camera motion to realize a visual odometer, and constructing and optimizing a local map; (5) constructing a target semantic library from the obtained 3D point cloud semantic information; and (6) performing semantic fusion on the local map according to the target semantic library, and constructing an octree semantic map. Compared with conventional methods, the feature points inside dynamic object masks are eliminated by combining semantic category information, which effectively reduces the influence of dynamic objects on positioning and mapping; the adopted 3D semantic acquisition mode is more direct and efficient, yielding better positioning, mapping, and semantic perception.
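The six parts above form a per-frame processing chain. The following is a purely structural sketch of how the stages would be wired together; every function name and return value is a hypothetical stub invented for illustration, not code or naming from the patent.

```python
# Structural sketch of the six-part pipeline from the abstract.
# All functions are illustrative stubs; only the ordering matters.

def acquire_rgbd():                                   # (1) depth camera
    return {"color": "rgb", "depth": "d"}

def point_cloud_semantics(frame):                     # (2) point cloud deep learning
    return {"labels": ["chair"]}

def remove_dynamic_points(frame, semantics):          # (3) dynamic feature removal
    return frame

def visual_odometry_and_local_map(frame):             # (4) visual odometer + local map
    return {"poses": []}

def build_semantic_library(semantics):                # (5) target semantic library
    return {"chair": semantics}

def fuse_into_octree_map(local_map, library):         # (6) semantic fusion + octree map
    return {"octree": (local_map, library)}

def run_pipeline():
    frame = acquire_rgbd()
    semantics = point_cloud_semantics(frame)
    frame = remove_dynamic_points(frame, semantics)
    local_map = visual_odometry_and_local_map(frame)
    library = build_semantic_library(semantics)
    return fuse_into_octree_map(local_map, library)

semantic_map = run_pipeline()
print(sorted(semantic_map))  # -> ['octree']
```

Note the ordering dependency: semantics (2) must precede dynamic-point removal (3), because the dynamic object masks come from the semantic labels, and odometry (4) runs only on the cleaned feature set.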

Description

technical field

[0001] The invention belongs to the application of computer vision technology in the field of mobile robots, and specifically relates to a method for building a 3D semantic map of an indoor environment based on point cloud deep learning.

Background technique

[0002] The Simultaneous Localization and Mapping (SLAM) algorithm estimates a sensor's own pose while simultaneously constructing a map of the environment from sensor data, and is a key technology for mobile robots. According to the type of sensor, SLAM is divided into two categories: laser-based SLAM and vision-based SLAM. Research on laser-based SLAM started relatively early, and its theory and technology are comparatively mature; however, high-precision laser sensors are expensive, with poor cost-effectiveness. SLAM schemes using cameras as sensors are called visual SLAM (Vision-based SLAM, VSLAM). Traditional visual SLAM focuses on geometric positioning and mapping, whi...


Application Information

IPC(8): G06T7/13; G06T7/11; G06N3/04; G06K9/62; G06F16/29
CPC: G06T7/13; G06T7/11; G06F16/29; G06T2207/20016; G06T2207/10028; G06N3/045; G06F18/24
Inventor 朱威绳荣金陈璐瑶郑雅羽何德峰
Owner ZHEJIANG UNIV OF TECH