A method and system for realizing a visual SLAM semantic mapping function based on an atrous convolutional deep neural network

A deep neural network and semantic mapping technology, applied in the field of visual SLAM semantic mapping based on deep convolutional neural networks, which can solve problems such as difficult deployment in embedded systems, poor real-time performance, and false detection.

Active Publication Date: 2019-04-02
EAST CHINA UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

[0003] The purpose of semantic segmentation is to understand a scene and achieve precise segmentation between the various objects in it. It can be used in autonomous driving or robotics to help identify objects and the relationships between them. For example, the DeepLab deep neural network structure proposed by Google is currently widely used in the field of semantic segmentation (L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. arXiv:1606.00915, 2016). However, because general semantic segmentation networks compute with poor real-time performance, they are difficult to apply in embedded systems. At the same time, semantic segmentation also brings problems such as indistinct edge-contour segmentation, false detection, and missed detection.
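As an illustration of the atrous (dilated) convolution that DeepLab builds on, the minimal PyTorch sketch below (not taken from the patent; the channel sizes and input are placeholders) shows how a dilation factor enlarges the receptive field without adding parameters or downsampling:

    import torch
    import torch.nn as nn

    # A 3x3 kernel with dilation=2 samples the input on a spread-out 5x5
    # grid, so the receptive field grows while the parameter count stays
    # that of a plain 3x3 convolution. padding=2 preserves spatial size.
    atrous = nn.Conv2d(in_channels=64, out_channels=64,
                       kernel_size=3, dilation=2, padding=2)

    x = torch.randn(1, 64, 128, 128)  # placeholder feature map
    y = atrous(x)
    print(y.shape)  # torch.Size([1, 64, 128, 128])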

Method used



Examples


Embodiment Construction

[0092] In order to describe the technical content of the present invention more clearly, further description will be given below in conjunction with specific embodiments.

[0093] The method for realizing the visual SLAM semantic mapping function based on the atrous convolution deep neural network comprises the following steps:

[0094] (1) The embedded development processor obtains the color information and depth information of the current environment through the RGB-D camera;
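A minimal sketch of step (1), assuming TUM-RGBD-style aligned color/depth frames on disk (the file names and depth scale are hypothetical; a real deployment would read frames from the camera driver):

    import cv2
    import numpy as np

    # Hypothetical paths to one aligned color/depth frame pair.
    color = cv2.imread("rgb/0001.png", cv2.IMREAD_COLOR)            # 8-bit BGR
    depth_raw = cv2.imread("depth/0001.png", cv2.IMREAD_UNCHANGED)  # 16-bit depth

    DEPTH_SCALE = 5000.0  # TUM convention: 5000 raw units per meter
    depth_m = depth_raw.astype(np.float32) / DEPTH_SCALE  # meters; 0 = invalid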

[0095] (2) Obtain feature point matching pairs from the collected images, perform pose estimation, and obtain scene space point cloud data;

[0096] (2.1) Extract image feature points through visual SLAM technology, and perform feature matching to obtain feature point matching pairs;
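A sketch of step (2.1) with OpenCV; the patent text does not name a specific detector here, so ORB (the usual choice in visual SLAM front ends) is assumed, and the frame paths are placeholders:

    import cv2

    img1 = cv2.imread("rgb/0001.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frames
    img2 = cv2.imread("rgb/0002.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Brute-force Hamming matching with cross-check to discard asymmetric matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    good = [m for m in matches if m.distance < 50]  # heuristic distance gate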

[0097] (2.2) Solve the current pose of the camera from matched 3D point pairs;
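Step (2.2) admits a closed-form solution over matched 3D point pairs; the sketch below uses the SVD method of Arun et al. (an assumption about the solver, since the patent text does not specify one):

    import numpy as np

    def pose_from_3d_pairs(P, Q):
        """Closed-form rigid transform (R, t) with R @ P[i] + t ~= Q[i],
        via SVD of the cross-covariance (Arun et al., 1987)."""
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)        # 3x3 cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:         # repair a reflection, keep a rotation
            Vt[2, :] *= -1
            R = Vt.T @ U.T
        t = cq - R @ cp
        return R, t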

[0098] (2.3) Obtain a more accurate pose estimate through graph optimization (Bundle Adjustment);
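A motion-only sketch of the Bundle Adjustment refinement in step (2.3), minimizing reprojection error over one camera pose with scipy (the intrinsics and data are placeholders; a production SLAM system would typically use g2o or Ceres and jointly refine landmarks):

    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5  # placeholder pinhole intrinsics

    def residuals(params, pts3d, pts2d):
        """Reprojection error for one pose: rotation vector + translation."""
        rvec, t = params[:3], params[3:]
        cam = Rotation.from_rotvec(rvec).apply(pts3d) + t  # world -> camera
        u = fx * cam[:, 0] / cam[:, 2] + cx
        v = fy * cam[:, 1] / cam[:, 2] + cy
        return np.concatenate([u - pts2d[:, 0], v - pts2d[:, 1]])

    pts3d = np.random.rand(50, 3) + [0.0, 0.0, 2.0]  # dummy landmarks ahead of camera
    obs = np.stack([fx * pts3d[:, 0] / pts3d[:, 2] + cx,
                    fy * pts3d[:, 1] / pts3d[:, 2] + cy], axis=1)
    x0 = 0.05 * np.ones(6)  # perturbed initial guess; true pose is identity
    sol = least_squares(residuals, x0, args=(pts3d, obs))  # sol.x -> refined pose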

[0099]...



Abstract

The invention relates to a method for realizing a visual SLAM semantic mapping function based on an atrous convolutional deep neural network. The method comprises the following steps: (1) using an embedded development processor to obtain the color information and the depth information of the current environment via an RGB-D camera; (2) obtaining feature point matching pairs from the collected images, carrying out pose estimation, and obtaining scene space point cloud data; (3) carrying out pixel-level semantic segmentation on the image by utilizing deep learning, and giving the spatial points semantic annotation information through the mapping between the image coordinate system and the world coordinate system; (4) optimizing the semantic segmentation by eliminating its errors through manifold clustering; and (5) performing semantic mapping, splicing the spatial point clouds to obtain a point cloud semantic map composed of dense discrete points. The invention also relates to a system for realizing the visual SLAM semantic mapping function based on the atrous convolutional deep neural network. With the adoption of the method and the system, the spatial network map carries higher-level semantic information and better meets the use requirements of the real-time mapping process.
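For step (3), the mapping from the image coordinate system to the world coordinate system is the standard pinhole back-projection; a minimal sketch under assumed intrinsics (the values are placeholders, and R, t denote the camera-to-world pose estimated in step (2)):

    import numpy as np

    fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5  # placeholder intrinsics

    def pixel_to_world(u, v, d, R, t):
        """Back-project pixel (u, v) with depth d (meters) into the world frame."""
        p_cam = np.array([(u - cx) * d / fx,
                          (v - cy) * d / fy,
                          d])         # point in the camera frame
        return R @ p_cam + t          # camera-to-world rigid transform

    # The resulting world point inherits the semantic label that the
    # segmentation network predicted for pixel (u, v).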

Description

Technical field

[0001] The present invention relates to the field of real-time localization and mapping for unmanned systems, in particular to the field of semantic segmentation in image processing, and specifically to a method and system for realizing a visual SLAM semantic mapping function based on a deep convolutional neural network.

Background technique

[0002] In recent years, unmanned systems have developed rapidly; autonomous driving vehicles, robots, and drones are typical unmanned systems. Visual SLAM (Simultaneous Localization and Mapping) systems have been widely used in the positioning and path planning of unmanned systems, for example ORB-SLAM proposed by Mur-Artal in 2015 (Mur-Artal R, Montiel J M M, Tardós J D. ORB-SLAM: A Versatile and Accurate Monocular SLAM System [J]. IEEE Transactions on Robotics, 2015, 31(5): 1147-1163). The spatial network map established by a visual SLAM system only contains low-level information, such ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/11; G06K9/62; G06N3/04
CPC: G06T7/11; G06T2207/10016; G06N3/045; G06F18/23; G06F18/22; G06F18/2411
Inventor: Zhu Yu, Huang Junjian, Chen Xudong, Zheng Bingbing, Ni Guangyao
Owner: EAST CHINA UNIV OF SCI & TECH