Robot indoor environment three-dimensional semantic map construction method based on deep learning

A semantic-map and indoor-environment technology, applied to instruments, image analysis, image enhancement, and related fields. It addresses the problems of reduced mapping efficiency, a large amount of computation, and high computing-power requirements, with the effects of improving mapping efficiency, reducing the amount of data, and simplifying the map-building process.

Inactive Publication Date: 2020-05-19
NANJING UNIV OF SCI & TECH

Problems solved by technology

This method must first construct a geometric map of the environment, then segment the color images to obtain semantic information, and finally fuse the semantic and geometric information into a semantic map. This requires a large amount of computation and places high demands on the computer's processing power, which is not conducive to deployment on mobile robots.
Chinese patent CN104732587B discloses a depth-sensor-based indoor 3D semantic map construction method in which every collected color frame must be semantically segmented. Because the sequence contains a large number of redundant images, this reduces construction efficiency.

Embodiment Construction

[0019] The present invention is further described below with reference to the accompanying drawings and specific embodiments.

[0020] With reference to Figure 1, the method of the present invention for building a three-dimensional semantic map of a robot's indoor environment based on deep learning comprises the following steps:

[0021] Step 1: Collect an RGB image sequence and a depth image sequence of the indoor environment with a depth camera.

[0022] The specific implementation is as follows: the user continuously images the indoor environment by holding the depth camera or mounting it on the robot, obtaining continuous RGB and depth image sequences.

[0023] Step 2: Perform ORB feature extraction and matching on each collected RGB frame, and determine the key frames.

[0024] The specific implementation steps are:

[0025] Step 21: Detect the Oriented FAST corner positions in each frame, and calculate the BRIEF...

Abstract

The invention provides a method for constructing a three-dimensional semantic map of a robot's indoor environment based on deep learning. The method comprises the following steps: first, an RGB image sequence and a depth image sequence of the indoor environment are collected with a depth camera; ORB feature extraction and matching are performed on each collected RGB frame, and key frames are determined; the matched feature point pairs are used to compute the pose transformation matrix T between adjacent frames via an ICP algorithm; semantic segmentation is performed on the key frames with a trained deep learning network, yielding key-frame images classified pixel by pixel; and point cloud stitching is performed on the segmented key-frame images, combining the computed transformation matrices T with the depth images corresponding to the key frames, to obtain a semantic map the robot can understand. The semantic map is built directly from the segmented key-frame images, so no semantic fusion is needed after a geometric map of the environment has been established.
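The geometric core of this pipeline — lifting matched features to 3D with the depth image, computing the pose transformation T between adjacent frames, and applying T to stitch point clouds — can be sketched in NumPy. This is a minimal sketch, not the patent's implementation: with known correspondences, a single closed-form SVD alignment (Kabsch/Umeyama) is the step that ICP iterates, and the camera intrinsics `fx, fy, cx, cy` are illustrative values.

```python
# Sketch of the T-estimation and point-cloud stitching steps.
# Assumptions: matched 2D features already lifted to 3D via the depth
# image; intrinsics are illustrative (roughly Kinect-like).
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Lift pixel (u, v) with depth z to a 3D point in the camera frame."""
    return np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])

def rigid_transform(P, Q):
    """Closed-form least-squares rigid transform (Kabsch/Umeyama):
    find T = [R|t] with Q ~ R @ P + t, given matched 3xN point sets."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t.ravel()
    return T

def transform_cloud(T, cloud):
    """Apply a 4x4 pose T to a 3xN cloud (used when stitching key frames)."""
    return T[:3, :3] @ cloud + T[:3, 3:4]

# A pixel at the principal point back-projects onto the optical axis.
p = backproject(320.0, 240.0, 2.0, fx=525.0, fy=525.0, cx=320.0, cy=240.0)

# Demo: recover a known rotation + translation from matched 3D points.
rng = np.random.default_rng(0)
P = rng.uniform(0.5, 3.0, (3, 40))             # points seen in frame k
angle = np.deg2rad(10.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[0.10], [0.05], [0.02]])
Q = R_true @ P + t_true                        # same points in frame k+1
T = rigid_transform(P, Q)
err = np.abs(transform_cloud(T, P) - Q).max()
print(p, err)
```

In the full method, `transform_cloud` is what splices each segmented key frame's labeled point cloud into the common map frame by chaining the per-frame matrices T.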

Description

technical field
[0001] The invention belongs to the technical field of mobile robot visual environment perception, and in particular relates to a method for constructing a three-dimensional semantic map of a robot's indoor environment based on deep learning.
background technique
[0002] Home service robots generally rest on three core technologies: environmental perception, human-computer interaction, and motion control. Perception and understanding of the environment is undoubtedly a core capability for indoor mobile robots performing tasks. The traditional way of obtaining information about a robot's surroundings is to build an indoor environment map with 2D laser SLAM (Simultaneous Localization And Mapping), which has significant limitations. First, the map created by lidar is two-dimensional and lacks three-dimensional spatial information; when navigating and avoiding obstacles, the robot can only avoid...

Application Information

IPC(8): G06T17/05; G06T7/33; G06T7/10
CPC: G06T7/10; G06T7/337; G06T17/05; G06T2207/10016; G06T2207/10024; G06T2207/10028
Inventors: 王永娟, 徐少杰, 曹雏清
Owner NANJING UNIV OF SCI & TECH