Visual SLAM method based on semantic segmentation dynamic points

A semantic-segmentation and dynamic-point technique applied in the field of computer vision. It addresses problems such as inaccurate camera pose estimation, reduced robustness of visual odometry, and the inability to build a globally consistent map, with the effect of improving accuracy and robustness.

Pending Publication Date: 2021-10-19
CHANGCHUN UNIV OF TECH


Problems solved by technology

Most visual SLAM schemes assume that the environment is static. In real scenes, however, dynamic targets often appear; they reduce the robustness of visual odometry, cause inaccurate camera pose estimation, and prevent the construction of a globally consistent map.
In addition, the geometric map constructed by visual SLAM main...



Examples


Embodiment 1

[0063] A visual SLAM method based on semantically segmented dynamic points:

[0064] Step 1. After the camera collects RGB-D image data, the RGB image is first passed to a convolutional neural network (CNN), which separates out all a-priori dynamic objects, completing the semantic segmentation task and producing a mask of every dynamic object in the image. Because feature points tend to appear on object boundaries, the mask is dilated to expand the dynamic-object boundary and eliminate the feature points lying on it. On this basis, the ORB feature points of the image are then extracted, and the camera pose is estimated by feature matching. The mask obtained with Mask R-CNN therefore allows the feature points of the static part of the image to be retained as input to the subsequent stages, improving the system's robustness in dynamic environments.

[0065] Step 2. Although most dynamic objects can be eliminated by using Mas...
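Although the text is truncated here, the abstract describes Step 2 as a dynamic-object detection algorithm based on multi-view geometric constraints. A common form of such a check flags matched feature points that violate the epipolar constraint of a robustly estimated fundamental matrix; the sketch below is an assumed illustration of that idea, not the patent's exact algorithm (function name and pixel threshold are hypothetical):

```python
import numpy as np

def flag_dynamic_points(pts1, pts2, F, thresh_px=1.0):
    """Flag matches violating the epipolar constraint as residual
    dynamic points. pts1, pts2 are (N, 2) pixel coordinates of matched
    features in two frames; F is the fundamental matrix estimated
    robustly (e.g. RANSAC) from the predominantly static matches."""
    pts1_h = np.hstack([pts1, np.ones((len(pts1), 1))])  # homogeneous coords
    pts2_h = np.hstack([pts2, np.ones((len(pts2), 1))])
    lines2 = (F @ pts1_h.T).T                 # epipolar lines in image 2
    num = np.abs(np.sum(lines2 * pts2_h, axis=1))
    den = np.sqrt(lines2[:, 0] ** 2 + lines2[:, 1] ** 2)
    dist = num / den                          # point-to-epipolar-line distance (px)
    return dist > thresh_px                   # True = likely dynamic
```

Points flagged True would be rejected before pose optimization, complementing the semantic mask for movable objects the network missed.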



Abstract

The invention discloses a visual SLAM method based on semantic segmentation of dynamic points, relating to the technical field of computer vision. The method comprises the following steps: acquiring environmental image information with an RGB-D camera, and performing feature extraction and semantic segmentation on the obtained images to produce ORB feature points and a semantic segmentation result; detecting residual dynamic objects with a dynamic-object detection algorithm based on multi-view geometric constraints and rejecting the dynamic feature points; and executing the tracking, local mapping, and loop-closure detection threads in sequence, so that an accurate octree three-dimensional semantic map of the static scene is constructed within a dynamic scene, finally realizing a visual SLAM method based on semantically segmented dynamic points for dynamic scenes.
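The static octree semantic map named in the abstract can be illustrated with a minimal voxel-grid stand-in. This is a sketch of the data flow only: a real system would typically use an octree library such as OctoMap, and the class, method, and parameter names below are assumptions rather than the patent's implementation:

```python
import numpy as np

class VoxelSemanticMap:
    """Toy voxel map: static 3-D points back-projected from RGB-D frames
    are quantized into voxels together with their semantic labels."""

    def __init__(self, voxel_size=0.05):
        self.voxel_size = voxel_size
        self.voxels = {}  # (i, j, k) voxel index -> semantic label

    def insert(self, points_xyz, labels):
        """Insert static points (N, 3, metres) with per-point labels.
        Dynamic points are assumed to have been rejected upstream."""
        idx = np.floor(np.asarray(points_xyz) / self.voxel_size).astype(int)
        for key, label in zip(map(tuple, idx), labels):
            self.voxels[key] = label  # later observations overwrite earlier ones
```

Because only static feature points survive the semantic and geometric filtering, the resulting map contains no trails from moving objects, which is the property the abstract claims for the octree map.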

Description

Technical field:

[0001] The present invention relates to the technical field of computer vision, and more specifically to a visual SLAM method based on semantically segmented dynamic points.

Background technique:

[0002] Simultaneous localization and mapping (SLAM) research has a long history. First proposed by Smith et al. and subsequently refined by many scholars, it refers to a robot estimating its own position from the information obtained by its mounted sensors in an unknown environment while simultaneously constructing a map of the perceived surroundings. Visual SLAM is a system that uses cameras as sensors to complete the positioning and mapping tasks. It is a prerequisite for mobile robots to perform intelligent tasks and has become a hot spot in current research on autonomous mobile robot navigation.

[0003] At present, researchers have proposed many mature algorithms, such...

Claims


Application Information

IPC(8): G06T7/10, G06T17/00, G06N3/04, G06N3/08
CPC: G06T7/10, G06T17/00, G06N3/08, G06T2207/10004, G06T2207/20081, G06T2207/20084, G06T2207/20076, G06N3/045
Inventor: 唐新星刘新刘忠旭陈永刚刘博聪陈国梁项天野
Owner: CHANGCHUN UNIV OF TECH