Semantic mapping method based on visual SLAM and two-dimensional semantic segmentation

A semantic segmentation and semantic mapping technology, applied in the cross-disciplinary field of computer vision and deep learning, that addresses problems such as degraded quality of the built map and achieves improved robustness and system performance.

Active Publication Date: 2020-07-28
EAST CHINA UNIV OF SCI & TECH

Problems solved by technology

[0025] The purpose of the present invention is to provide a semantic mapping method based on visual SLAM and two-dimensional semantic segmentation...




Embodiment Construction

[0077] To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are intended only to explain the invention, not to limit it.

[0078] A semantic map is a map containing rich semantic information: an abstraction of the spatial geometric relations in the environment together with the types and positions of the objects present. Because it carries both the spatial information and the semantic information of the environment, a semantic map lets a mobile robot know, like a human, both where objects are in the environment and what those objects are.
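The idea of combining geometry with object identity can be sketched with a minimal data structure. The patent does not specify its internal map representation, so the fields and helper below are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class SemanticPoint:
    """One entry of a semantic point cloud: geometry plus meaning.

    Illustrative only; the patent does not disclose its actual
    map representation.
    """
    x: float
    y: float
    z: float
    label: str         # object class, e.g. "chair"
    confidence: float  # segmentation confidence in [0, 1]

def count_label(cloud, label):
    """Number of map points carrying a given semantic label."""
    return sum(1 for p in cloud if p.label == label)

# A robot holding such a map knows both *where* points are (x, y, z)
# and *what* they belong to (label).
cloud = [
    SemanticPoint(0.0, 0.0, 1.0, "chair", 0.9),
    SemanticPoint(0.1, 0.0, 1.1, "chair", 0.8),
    SemanticPoint(2.0, 1.0, 0.5, "table", 0.95),
]
```

A query such as `count_label(cloud, "chair")` is exactly the kind of question a purely geometric SLAM map cannot answer.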

[0079] Aiming at the problems and deficiencies in the prior art, the present invention proposes a semantic mapping system based on visual SLAM and two-dimensional semantic segmentation...



Abstract

The invention relates to the field of cross fusion of computer vision and deep learning, in particular to a semantic mapping method based on visual SLAM and two-dimensional semantic segmentation. The method comprises the following steps: S1, calibrating camera parameters and correcting camera distortion; S2, acquiring an image frame sequence; S3, preprocessing the images; S4, judging whether the current image frame is a key frame; if so, proceeding to step S6, and if not, proceeding to step S5; S5, performing dynamic blur compensation; S6, carrying out semantic segmentation: extracting ORB feature points from the image frames and performing semantic segmentation with a mask region convolutional neural network (Mask R-CNN) algorithm model; S7, pose calculation: computing the camera pose with a sparse SLAM algorithm model; S8, using the semantic information to assist dense semantic map construction, achieving a three-dimensional semantic map built on the global point cloud map. The invention improves the performance of an unmanned aerial vehicle semantic mapping system and significantly improves the robustness of feature point extraction and matching in dynamic scenes.
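The branching in steps S4-S8 can be sketched as a simple per-frame dispatcher. The real method extracts ORB features, runs Mask R-CNN, and solves the pose with sparse SLAM; those stages are only named here, and whether a blur-compensated non-keyframe re-enters the pipeline at S6 is not stated in the abstract, so this sketch simply returns after S5:

```python
def process_frame(is_keyframe):
    """Trace which pipeline steps (S4-S8 of the abstract) run for one frame.

    Stubs only: S5-S8 name the patent's stages without implementing them.
    Assumption: a non-keyframe stops after blur compensation (S5).
    """
    steps = ["S4"]                # keyframe decision
    if not is_keyframe:
        steps.append("S5")        # dynamic blur compensation
        return steps
    steps.append("S6")            # ORB extraction + Mask R-CNN segmentation
    steps.append("S7")            # camera pose from the sparse SLAM model
    steps.append("S8")            # fuse labels into the dense semantic map
    return steps
```

Keyframes thus carry the full semantic-mapping workload, while ordinary frames receive only the lightweight compensation pass, which is consistent with keeping the heavy segmentation network off the per-frame critical path.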

Description

Technical Field

[0001] The present invention relates to the field of cross fusion of computer vision and deep learning, and more specifically to a semantic mapping method based on visual SLAM and two-dimensional semantic segmentation.

Background Technique

[0002] UAVs are generally composed of three modules: intelligent decision-making, environmental perception, and motion control, among which environmental perception is the foundation of everything else.

[0003] To perceive the surrounding environment, drones need a stable and powerful sensor system to act as their "eyes", along with corresponding algorithms and powerful processing units to "understand" objects.

[0004] In the environmental perception module of a UAV, the visual sensor is indispensable; it can be a camera. Compared with lidar and millimeter-wave radar, a camera has higher resolution and can capture enough environmental detail. For example, it can describe the app...

Claims


Application Information

IPC (8): G06T7/11; G06N3/04; G06N3/08; G06T7/50; G06T7/80; G06T7/90
CPC: G06T7/11; G06T7/80; G06T7/90; G06T7/50; G06N3/08; G06T2207/10024; G06T2207/20084; G06N3/045; Y02T10/40
Inventors: 唐漾, 钱锋, 杜文莉, 堵威
Owner: EAST CHINA UNIV OF SCI & TECH