Robust vision SLAM method based on deep learning in dynamic scene

A robust visual SLAM method based on deep learning for dynamic scenes, applied in image data processing, instrumentation, computing, and related fields. The method addresses the complexity of dynamic scenes, reduces the absolute trajectory error and relative pose error, and improves the accuracy and robustness of pose estimation.

Pending Publication Date: 2021-03-05
BEIJING UNIV OF TECH

AI Technical Summary

Problems solved by technology

Due to the complexity of dynamic scenes and the influence of factors such as incorrect correspondences or occlusion of tracked features, traditional visual SLAM systems suffer from low accuracy and poor robustness.



Embodiment Construction

[0050] Figure 1 is the flow chart of the inventive method. Referring to Figure 1, the present invention provides a robust visual SLAM method based on deep learning in dynamic scenes. Four threads run in parallel in the system: tracking, semantic segmentation, local mapping, and loop closure detection. When an original RGB image arrives, it is passed simultaneously to the semantic segmentation thread and the tracking thread, which process it in parallel. The semantic segmentation thread uses the Mask R-CNN network to divide objects into dynamic objects and static objects and provides pixel-level semantic labels of the dynamic objects to the tracking thread. Potential dynamic feature-point outliers are then further detected by a geometrically constrained motion consistency detection algorithm, after which the ORB feature points on dynamic objects are removed and the relatively stable static feature points are used for pose estimation. And by inserting key frames, deleting redundant...
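The patent text does not disclose source code; the following is a minimal sketch of the geometrically constrained motion consistency check described above, assuming a known fundamental matrix F between consecutive frames and matched pixel coordinates. The function names, the toy pure-translation F, and the 1-pixel threshold are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def epipolar_distance(F, p1, p2):
    """Distance (in pixels) from point p2 in frame 2 to the epipolar
    line induced by its match p1 in frame 1, i.e. l = F @ p1_homogeneous."""
    l = F @ np.array([p1[0], p1[1], 1.0])
    return abs(l @ np.array([p2[0], p2[1], 1.0])) / np.hypot(l[0], l[1])

def motion_consistency_check(F, matches, threshold=1.0):
    """Flag a match as a potential dynamic outlier when the matched point
    lies farther than `threshold` pixels from its epipolar line."""
    return [epipolar_distance(F, p1, p2) > threshold for p1, p2 in matches]

# Toy demo: pure horizontal camera translation t = (1, 0, 0), so F = [t]_x
# and epipolar lines are horizontal (matched points must share the same row).
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])
matches = [((100, 50), (90, 50)),   # static point: stays on its epipolar line
           ((100, 50), (90, 60))]   # dynamic point: 10 px off the line
flags = motion_consistency_check(F, matches)
print(flags)  # → [False, True]
```

Feature points flagged `True` would then be excluded, together with the Mask R-CNN dynamic labels, before pose estimation uses the remaining static features.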



Abstract

The invention discloses a robust visual SLAM method based on deep learning in a dynamic scene, and belongs to the fields of artificial intelligence, robotics, and computer vision. A camera is used as the image acquisition device. The method comprises the following steps: first, objects in the image sequence acquired by the camera are divided into static objects and dynamic objects by a deep-learning-based Mask R-CNN semantic segmentation network, the pixel-level semantic segmentation of the dynamic objects is taken as semantic prior knowledge, and the feature points on the dynamic objects are removed; the remaining features are then further checked for dynamic motion using epipolar geometric constraints; finally, local mapping and a loop closure detection module are combined to form a complete robust visual SLAM system. The method effectively reduces the absolute trajectory error and relative pose error of the SLAM system and improves the accuracy and robustness of its pose estimation.
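As an illustration of the first step in the abstract (removing feature points that fall on dynamic objects), here is a minimal sketch that assumes the Mask R-CNN instance masks have already been merged into a single boolean dynamic-object mask for the frame. The function name and the toy mask/keypoints are hypothetical, not from the patent.

```python
import numpy as np

def remove_dynamic_keypoints(keypoints, dynamic_mask):
    """Keep only keypoints that fall outside the dynamic-object mask.

    keypoints    : iterable of (x, y) pixel coordinates (e.g. ORB detections)
    dynamic_mask : H x W boolean array, True where the segmentation network
                   labeled a pixel as belonging to a dynamic object
    """
    return [(x, y) for (x, y) in keypoints
            if not dynamic_mask[int(round(y)), int(round(x))]]

# Toy demo: a 100x100 frame whose left half is covered by a "person" mask.
mask = np.zeros((100, 100), dtype=bool)
mask[:, :50] = True
kps = [(10, 10), (70, 20), (49, 99)]
static_kps = remove_dynamic_keypoints(kps, mask)
print(static_kps)  # → [(70, 20)]
```

Only the surviving static keypoints would be passed on to pose estimation; points the mask misses are caught afterwards by the epipolar-constraint check.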

Description

Technical Field
[0001] The invention belongs to the fields of artificial intelligence, robotics, and computer vision, and in particular relates to a robust visual SLAM method based on deep learning in dynamic scenes.
Background Technique
[0002] In recent years, simultaneous localization and mapping (SLAM) has become an important research field in artificial intelligence, robotics, and computer vision. Localization and mapping in dynamic scenes is one of the popular research directions and is widely applied to indoor service robots, outdoor self-driving cars, and the like.
[0003] Most current visual SLAM methods are based on the assumption that the observed environment is static. Since real environments contain dynamic objects, traditional SLAM methods are prone to insufficient feature matching due to incorrect correspondences or occlusion of tracked features, resulting in pose-estimation drift or even tracking loss, so the system has low accuracy and poor robustness in dynamic scenes.


Application Information

IPC(8): G06T7/10, G06T7/246
CPC: G06T2207/10016, G06T2207/20081, G06T2207/20084, G06T7/10, G06T7/246
Inventors: 阮晓钢 (Ruan Xiaogang), 郭佩远 (Guo Peiyuan), 黄静 (Huang Jing), 于乃功 (Yu Naigong)
Owner BEIJING UNIV OF TECH