
Robust vision SLAM method based on semantic prior and deep learning features

A deep-learning and semantic technology, applied in character and pattern recognition, instruments, manipulators, and similar fields. It addresses problems such as static maps that are insufficient for indoor navigation, reduced accuracy of the visual SLAM algorithm's own pose estimation, and visual SLAM frameworks that do not consider the impact of dynamic objects, achieving the effect of a simple principle.

Active Publication Date: 2020-10-23
BEIHANG UNIV

Problems solved by technology

[0006] Whether the environment is indoor or outdoor, dynamic objects are undoubtedly one of the major challenges to the positioning accuracy and robustness of visual SLAM. Mainstream visual SLAM frameworks do not consider the impact of dynamic objects: they assume that the surrounding environment is static and that all changes in the acquired images are caused by camera motion, which complicates data association between frames. Examples include furniture being moved at home and people walking in an office. If such dynamic objects are reconstructed as part of the environment, then on the one hand the pose-estimation accuracy of the visual SLAM algorithm itself is reduced; on the other hand, when maps of dynamic scenes are built from geometric information alone, the resulting static maps are not sufficient for indoor navigation.



Embodiment Construction

[0031] As shown in Figure 1, the concrete implementation steps of the present invention are as follows:

[0032] Step 1: Build a visual SLAM framework based on deep-learning feature extraction, to initially achieve more robust visual positioning in weakly textured and dynamic scenes.

[0033] In the current classic visual SLAM frameworks, the extracted features are all handcrafted; the dominant example is the ORB-SLAM framework, which extracts ORB features. With the continuous development of deep learning, feature-extraction methods based on deep learning have received extensive attention. Image features extracted by deep learning express image information more fully and are more robust to environmental changes such as illumination. In addition, deep-learning-based feature extraction can obtain multi-level image features, combining low-level features (such as pixel-level grayscale features) and high-level features (such as …)
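The patent text stops short of network details here, but as a rough illustration of Step 1, the following is a minimal sketch of a two-head deep feature extractor in the style of detectors such as SuperPoint: a shared convolutional encoder feeds a keypoint-score head and a descriptor head, and the highest-scoring locations stand in for handcrafted ORB keypoints in the tracking thread. The class name, layer sizes, and top-k selection are illustrative assumptions, not the network claimed by the patent.

```python
# Minimal sketch (assumed architecture, not the patent's actual network):
# a shared encoder with a keypoint-score head and a descriptor head,
# whose outputs could replace ORB keypoints in a SLAM front end.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepFeatureExtractor(nn.Module):
    def __init__(self, desc_dim: int = 128):
        super().__init__()
        # Shared encoder: early layers keep low-level detail, deeper
        # strided layers capture higher-level structure (stride 4 total).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=1, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.score_head = nn.Conv2d(128, 1, 1)        # keypoint heatmap
        self.desc_head = nn.Conv2d(128, desc_dim, 1)  # dense descriptors

    def forward(self, image: torch.Tensor):
        feats = self.encoder(image)                   # (B, 128, H/4, W/4)
        scores = torch.sigmoid(self.score_head(feats))
        descs = F.normalize(self.desc_head(feats), dim=1)
        return scores, descs

def top_keypoints(scores, descs, k=500):
    """Return the k best (x, y) locations (at encoder resolution;
    multiply by the stride of 4 to map back to image pixels) together
    with their descriptors and confidence scores."""
    b, _, h, w = scores.shape
    vals, idx = scores.view(b, -1).topk(k, dim=1)
    ys, xs = idx // w, idx % w
    gathered = descs.view(b, descs.shape[1], -1).gather(
        2, idx.unsqueeze(1).expand(-1, descs.shape[1], -1))
    return torch.stack([xs, ys], dim=-1), gathered.transpose(1, 2), vals

# Usage on one grayscale frame from the camera sensor:
net = DeepFeatureExtractor().eval()
with torch.no_grad():
    scores, descs = net(torch.rand(1, 1, 480, 640))
    xy, d, conf = top_keypoints(scores, descs, k=500)
```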


Abstract

The invention relates to a robust visual SLAM method based on semantic priors and deep-learning features. The method comprises the following steps: (1) build a visual SLAM framework based on deep-learning feature extraction, in which the tracking thread feeds each image obtained by the camera sensor into a deep neural network and extracts deep feature points; (2) using a lightweight semantic-segmentation network model, perform semantic segmentation on the input video sequence to obtain segmentation results and thus semantic prior information about the dynamic objects in the scene; (3) filter the deep feature points extracted in step (1) according to the semantic prior information from step (2), removing the feature points on dynamic objects and improving positioning precision in dynamic scenes; and (4) obtain the static point cloud corresponding to each keyframe selected by the tracking thread according to the segmentation results of step (2), splice the static point clouds using the keyframe poses obtained in step (3), and construct a dense global point-cloud map in real time.
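Steps (2)–(4) of the abstract reduce to two small geometric operations: masking out feature points that fall on dynamic objects, and transforming each keyframe's static points into the world frame for splicing. A minimal NumPy sketch follows; the dynamic label set, the dilation margin, and the function names are assumptions for illustration, and the patent's lightweight segmentation network is not reproduced here.

```python
# Sketch of the semantic-prior filtering and point-cloud splicing steps.
# Label IDs and the dilation margin are illustrative assumptions.
import numpy as np

DYNAMIC_LABELS = {15}  # e.g. "person" in a PASCAL-VOC-style label map

def filter_dynamic_keypoints(keypoints, seg_mask, dilate=4):
    """keypoints: (N, 2) array of (x, y); seg_mask: (H, W) label map.

    Returns the subset of keypoints that do NOT lie on, or within
    `dilate` pixels of, a dynamic-object region, so that only static
    features feed the pose estimator and the map."""
    h, w = seg_mask.shape
    dynamic = np.isin(seg_mask, list(DYNAMIC_LABELS))
    keep = []
    for x, y in keypoints.astype(int):
        y0, y1 = max(0, y - dilate), min(h, y + dilate + 1)
        x0, x1 = max(0, x - dilate), min(w, x + dilate + 1)
        # Reject the point if any nearby pixel is dynamic; the margin
        # guards against imprecise segmentation-mask boundaries.
        keep.append(not dynamic[y0:y1, x0:x1].any())
    return keypoints[np.array(keep, dtype=bool)]

def splice_point_cloud(static_points_cam, pose_world_from_cam):
    """Transform a keyframe's static 3-D points (N, 3, camera frame)
    into the world frame with its 4x4 keyframe pose, so successive
    keyframes accumulate into one dense global point-cloud map."""
    n = static_points_cam.shape[0]
    homo = np.hstack([static_points_cam, np.ones((n, 1))])
    return (pose_world_from_cam @ homo.T).T[:, :3]
```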

Description

Technical field

[0001] The present invention relates to a robust visual SLAM method based on semantic prior information and deep-learning features: a visual SLAM algorithm that combines semantic priors with more robust deep-learning features and adapts well to weakly textured and dynamic scenes.

Background technique

[0002] Visual SLAM uses cameras and similar devices as its sensors; the cost is low and the information obtained is close to the human cognitive level, so it has been widely applied in mobile robots and other fields. Compared with outdoor scenes, indoor scenes have no drastic illumination changes and the robot moves at lower speed, so they are the main workplace of mobile robots. Although indoor navigation is safer than in outdoor environments, the indoor environment is more complex and obstacles are denser. To apply SLAM technology to indoor robot navigation, at least two challenges need to be solved. [000…


Application Information

IPC (8): G06K9/00, G06K9/34, G06K9/46, B25J9/16
CPC: B25J9/1666, B25J9/1697, G06V20/10, G06V10/267, G06V10/462
Inventors: 崔林艳, 赖嵩
Owner: BEIHANG UNIV