Method for completing visual SLAM closed-loop detection by fusing semantic information

A closed-loop detection and semantic-information technology, applied in the field of map creation, which can solve problems such as accumulated visual-sensor error and achieve the effects of low computational cost, good real-time performance, and good robustness.

Status: Inactive; Publication Date: 2020-11-03
广州万维创新科技有限公司

AI Technical Summary

Problems solved by technology

[0005] The purpose of the present invention is to solve the problem of cumulative visual-sensor error that arises while a robot performs simultaneous localization and map construction, and to provide a visual SLAM closed-loop detection method that fuses semantic information, addressing the closed-loop detection problem for robots under complex lighting conditions and reducing the errors caused by uncertainty in the robot pose and map variables.



Examples


Embodiment Construction

[0021] The present invention will be further described below in conjunction with the accompanying drawings and embodiments.

[0022] A method for completing visual SLAM closed-loop detection by fusing semantic information, as shown in figure 3, comprises the following steps:

[0023] Object recognition step: an object detector is used to recognize the objects in the image. The joint training method of YOLO9000 can be adopted to train the detector on a detection dataset and a classification dataset simultaneously: the object detection dataset is used to learn accurate object localization, while the classification dataset increases the number of detectable object categories and the robustness of the detector. In this step, YOLO9000 trained on the COCO object detection dataset and the ImageNet image classification dataset can detect more than 9000 categories of targets in real time, which improves the accuracy with which items are detected.
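The patent names YOLO9000 trained jointly on COCO and ImageNet; the sketch below only illustrates the inference side of this step with a generic Darknet-format YOLO model loaded through OpenCV's DNN module. The file names, the 416x416 input size, and the output-parsing convention are assumptions for illustration, not details taken from the patent.

```python
# Minimal sketch of the object-recognition step (assumption: a Darknet-format
# YOLO model readable by OpenCV's DNN module; file names are placeholders).
import cv2
import numpy as np

def detect_objects(image, net, class_names, conf_thresh=0.5, nms_thresh=0.4):
    """Run one forward pass and return (label, confidence, box) tuples."""
    h, w = image.shape[:2]
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    boxes, confidences, class_ids = [], [], []
    for out in outputs:
        for row in out:                  # row = [cx, cy, bw, bh, objectness, class scores...]
            scores = row[5:]
            cls = int(np.argmax(scores))
            conf = float(scores[cls])
            if conf < conf_thresh:
                continue
            cx, cy, bw, bh = row[0] * w, row[1] * h, row[2] * w, row[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(conf)
            class_ids.append(cls)

    keep = cv2.dnn.NMSBoxes(boxes, confidences, conf_thresh, nms_thresh)
    return [(class_names[class_ids[i]], confidences[i], boxes[i])
            for i in np.array(keep).flatten()]

# Usage (paths are illustrative placeholders, not files named in the patent):
# net = cv2.dnn.readNetFromDarknet("yolo.cfg", "yolo.weights")
# detections = detect_objects(left_image, net, class_names)
```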

[0024] Stereo matching step: select ...
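The text of the stereo-matching step is truncated above, so the following is only a hedged sketch of how recognized objects could serve as stereo-matching elements and then be triangulated with the standard pinhole binocular model (Z = f·B/d), which the abstract attributes to the three-dimensional position estimation step. The matching constraints used here (same class label, similar image row in rectified images, positive disparity) and the parameter names fx, cx, cy, baseline are illustrative assumptions, not the patent's "related constraint conditions".

```python
# Hedged sketch: detected objects as stereo-matching primitives, then depth
# from the standard pinhole binocular model. Constraints and parameter names
# are assumptions; the patent's exact constraint conditions are not shown here.

def match_objects(left_dets, right_dets, max_row_diff=20.0):
    """Pair same-class detections whose box centres lie on roughly the same
    image row of rectified left/right images (epipolar constraint)."""
    pairs = []
    for label_l, _, (x_l, y_l, w_l, h_l) in left_dets:
        cx_l, cy_l = x_l + w_l / 2.0, y_l + h_l / 2.0
        best, best_dy = None, max_row_diff
        for label_r, _, (x_r, y_r, w_r, h_r) in right_dets:
            if label_r != label_l:
                continue
            cx_r, cy_r = x_r + w_r / 2.0, y_r + h_r / 2.0
            dy = abs(cy_l - cy_r)
            if dy < best_dy and cx_l > cx_r:      # require positive disparity
                best, best_dy = (cx_r, cy_r), dy
        if best is not None:
            pairs.append((label_l, (cx_l, cy_l), best))
    return pairs

def triangulate(pairs, fx, baseline, cx, cy):
    """Pinhole stereo model, assuming fx == fy: Z = fx * B / d."""
    points = []
    for label, (ul, vl), (ur, _) in pairs:
        d = ul - ur                               # disparity in pixels
        if d <= 0:
            continue
        Z = fx * baseline / d
        X = (ul - cx) * Z / fx
        Y = (vl - cy) * Z / fx
        points.append((label, (X, Y, Z)))
    return points
```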



Abstract

The invention discloses a method for completing visual SLAM closed-loop detection by fusing semantic information. The method comprises an object recognition step of recognizing objects in an image with a target detector; a stereo matching step of taking the recognized objects as stereo matching elements according to related constraint conditions; a three-dimensional position estimation step of estimating the three-dimensional position of each object using the mathematical model of the binocular camera; an attribute graph model construction step of extracting the objects in the image and their spatial distribution information and establishing a corresponding attribute graph set for the image set; and a similarity measurement step of matching and comparing two attribute graphs. In this method the target detector is used to detect and identify the objects in the image, and existing image-oriented object detection methods have good real-time performance; because the subsequent calculation steps are completed on the basis of the recognition result, the amount of data participating in the operation is very small and the corresponding amount of calculation is relatively small, so the method has good real-time performance and maintains good robustness under complex illumination conditions.
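To make the last two steps of the abstract concrete, the sketch below gives one possible, illustrative reading of the attribute-graph idea: nodes carry an object label and its estimated 3D position, edges carry pairwise object distances, and similarity counts how many label-consistent object pairs keep approximately the same spacing in both graphs. The metric, the distance tolerance, and the loop-closure threshold mentioned at the end are assumptions; the patent's actual similarity measure is not reproduced here.

```python
# Illustrative sketch of an attribute graph and a simple similarity measure.
# Node/edge attributes and the metric are assumptions, not the patent's own.
import itertools
import math

def build_attribute_graph(objects_3d):
    """objects_3d: list of (label, (X, Y, Z)) from the 3D-estimation step."""
    nodes = list(objects_3d)
    edges = {}
    for (i, (_, p)), (j, (_, q)) in itertools.combinations(enumerate(nodes), 2):
        edges[(i, j)] = math.dist(p, q)          # pairwise object distance
    return nodes, edges

def graph_similarity(g1, g2, dist_tol=0.5):
    """Fraction of label-consistent object pairs whose inter-object distance
    agrees within dist_tol (an assumed, simple metric)."""
    nodes1, edges1 = g1
    nodes2, edges2 = g2
    matches, total = 0, 0
    for (i, j), d1 in edges1.items():
        labels = {nodes1[i][0], nodes1[j][0]}
        for (k, l), d2 in edges2.items():
            if {nodes2[k][0], nodes2[l][0]} != labels:
                continue
            total += 1
            if abs(d1 - d2) < dist_tol:
                matches += 1
    return matches / total if total else 0.0

# A loop closure would then be declared when graph_similarity(...) exceeds a
# threshold chosen for the environment (threshold value is an assumption).
```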

Description

Technical field

[0001] The invention belongs to the field of map creation, and in particular relates to a visual SLAM closed-loop detection method that fuses semantic information.

Background technique

[0002] In simultaneous localization and mapping (SLAM), closed-loop detection refers to judging, from the information obtained by the sensors, whether the robot is in an area it has visited before, or whether the current location of the robot already has a corresponding description in the created map. In graph-optimization-based SLAM, closed-loop detection is a very critical link: correct closed-loop detection helps to correct the odometry error and thus obtain a map with small error and globally consistent information, whereas wrong closed-loop detection increases the error and can even destroy the entire map.

[0003] The current mainstream loop closure detection methods are based on key frames. One very common image matching method is based on visual...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T17/05; G06T7/73; G06K9/62; G06K9/00; G06N3/04; G06N3/08
CPC: G06T17/05; G06T7/73; G06N3/08; G06V20/20; G06V2201/07; G06N3/045; G06F18/22
Inventor: 朱泽凡
Owner: 广州万维创新科技有限公司