Laser and visual information fused robust positioning and mapping method and system

A robust positioning and mapping method and system fusing laser and visual information. The technology addresses the problems of poor positioning accuracy, insufficient sensor information, and poor mapping quality in existing approaches; it compensates for the lack of loop-closure capability, yields accurate robot poses, and improves relocalization efficiency.

Active Publication Date: 2021-06-18
HUNAN UNIV

AI Technical Summary

Problems solved by technology

However, two-dimensional lidar can only obtain information on a single plane, and this amount of information is insufficient to detect obstacles outside the laser scanning plane.
At present, 3D lidar is generally very expensive; due to cost, it is mainly used in the field of driverless cars and is not well suited to mobile robots.
Thanks to the rapid development of camera technology and computing performance, many excellent visual SLAM positioning and mapping methods have emerged. However, they share common problems: cumulative error builds up in large environments, and when the camera moves too fast, lighting conditions are extreme, or the scene severely lacks texture features, visual SLAM suffers from tracking failure, poor positioning accuracy, and poor mapping quality.



Examples


Detailed Description of the Embodiments

[0051] As shown in Figure 1 and Figure 2, the robust positioning and mapping method fusing laser and visual information in this embodiment includes:

[0052] 1) The non-visual pose T_t^m of the robot is obtained by fusing the poses from the inertial measurement unit (IMU), the odometer, and the lidar, which eliminates the cumulative error the IMU generates over time (a fusion sketch follows);
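Below is a minimal sketch of the kind of weighted pose fusion step 1) describes, in Python with NumPy. The 2D pose representation (x, y, theta), the fixed weights, and the function names are illustrative assumptions, not from the patent; a real system would typically derive the weights from sensor covariances (for example, in an EKF).

```python
import numpy as np

def wrap_angle(a):
    """Wrap an angle to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

def fuse_non_visual_pose(pose_imu, pose_odom, pose_lidar,
                         weights=(0.2, 0.3, 0.5)):
    """Weighted fusion of 2D poses (x, y, theta) from IMU, odometer,
    and lidar into a single non-visual pose T_t^m.

    The lidar pose gets the largest weight here because it does not
    drift relative to the map, which counteracts the IMU's accumulated
    error. The weights are illustrative constants.
    """
    poses = np.array([pose_imu, pose_odom, pose_lidar], dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    x, y = w @ poses[:, :2]
    # Headings must be averaged on the unit circle, not linearly.
    theta = np.arctan2(w @ np.sin(poses[:, 2]), w @ np.cos(poses[:, 2]))
    return np.array([x, y, wrap_angle(theta)])

# Example: the lidar pulls the fused estimate toward its drift-free value.
T_t_m = fuse_non_visual_pose([1.02, 0.48, 0.11],
                             [1.05, 0.52, 0.12],
                             [0.98, 0.50, 0.10])
```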

[0053] 2) Feature extraction and matching are performed between the current frame of the visual image and a reference keyframe to estimate the visual pose of the robot. If the estimation fails, non-visual assisted relocalization is performed using the non-visual pose T_t^m, finally yielding the visual pose T_t^v of the robot (a matching sketch follows);
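A minimal sketch of the feature extraction, matching, and fallback logic in step 2), using OpenCV's ORB features on grayscale frames. The match threshold, the affine stand-in for pose recovery, and the function name are illustrative assumptions; the patent's actual relocalization procedure is not detailed here.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def estimate_visual_pose(frame, ref_frame, T_t_m, min_matches=30):
    """Estimate a visual pose from ORB matches between the current frame
    and the reference keyframe; returns (pose, tracked_ok).

    If too few matches survive (fast motion, extreme lighting, missing
    texture), fall back to the non-visual pose T_t^m as the relocalization
    seed, mirroring the failure branch of step 2).
    """
    kp1, des1 = orb.detectAndCompute(ref_frame, None)
    kp2, des2 = orb.detectAndCompute(frame, None)
    if des1 is None or des2 is None:
        return np.asarray(T_t_m, dtype=float), False  # no features at all
    matches = matcher.match(des1, des2)
    if len(matches) < min_matches:
        return np.asarray(T_t_m, dtype=float), False  # tracking lost
    # Toy stand-in for pose recovery: a real pipeline would use the
    # essential matrix or PnP on these correspondences for a metric pose.
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    M, _ = cv2.estimateAffinePartial2D(src, dst)
    if M is None:
        return np.asarray(T_t_m, dtype=float), False
    theta = np.arctan2(M[1, 0], M[0, 0])               # in-plane rotation
    return np.array([M[0, 2], M[1, 2], theta]), True   # pixel-space pose
```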

[0054] 3) The visual pose T_t^v and the non-visual pose T_t^m of the robot are fused to obtain the fused robot pose T_t^f (a fusion sketch follows);
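Step 3)'s fusion could look like the following sketch. The blend weight, and the rule of trusting T_t^v only when tracking succeeded, are assumptions for illustration rather than the patent's specified method.

```python
import numpy as np

def fuse_robot_pose(T_t_v, T_t_m, tracked_ok, alpha=0.7):
    """Blend the visual pose T_t^v and non-visual pose T_t^m into T_t^f.

    When visual tracking succeeded, the visual pose gets weight alpha;
    when it failed (T_t^v was merely relocalized from T_t^m), the
    non-visual pose is used alone. alpha is an illustrative constant.
    """
    w = alpha if tracked_ok else 0.0
    T_t_v = np.asarray(T_t_v, dtype=float)
    T_t_m = np.asarray(T_t_m, dtype=float)
    x, y = w * T_t_v[:2] + (1 - w) * T_t_m[:2]
    # Blend headings on the unit circle to avoid wrap-around artifacts.
    theta = np.arctan2(w * np.sin(T_t_v[2]) + (1 - w) * np.sin(T_t_m[2]),
                       w * np.cos(T_t_v[2]) + (1 - w) * np.cos(T_t_m[2]))
    return np.array([x, y, theta])

T_t_f = fuse_robot_pose([1.00, 0.49, 0.10], [0.98, 0.50, 0.11], True)
```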

[0055] 4) Closed-loop detection is carried out on the fused robot pose T_t^f, and a closed-loop optimization ... (a pose-graph sketch follows)
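To make the graph-based closed-loop optimization of step 4) concrete, here is a tiny 2D pose-graph sketch solved with SciPy's least-squares routine. The square trajectory, the single loop-closure edge, and the unweighted residuals are toy assumptions; a production system would use a dedicated graph optimizer such as g2o or Ceres with proper information matrices.

```python
import numpy as np
from scipy.optimize import least_squares

def wrap(a):
    return (a + np.pi) % (2 * np.pi) - np.pi

def relative_pose(xi, xj):
    """Pose of node j expressed in node i's frame (2D SE(2))."""
    dx, dy = xj[0] - xi[0], xj[1] - xi[1]
    c, s = np.cos(xi[2]), np.sin(xi[2])
    return np.array([c * dx + s * dy, -s * dx + c * dy, wrap(xj[2] - xi[2])])

def residuals(flat, edges, n):
    """Stack edge errors; each edge (i, j, z) demands that node j seen
    from node i matches the measured relative pose z."""
    x = flat.reshape(n, 3)
    r = []
    for i, j, z in edges:
        e = relative_pose(x[i], x[j]) - z
        e[2] = wrap(e[2])
        r.extend(e)
    r.extend(x[0])  # anchor node 0 so the graph is not free-floating
    return np.array(r)

# Drifted trajectory around a unit square; the loop-closure edge (3 -> 0)
# says the robot is back at the start, which the optimization enforces.
poses = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, np.pi / 2],
                  [1.2, 1.1, np.pi], [0.1, 1.2, -np.pi / 2]])
edges = [(0, 1, np.array([1.0, 0.0, np.pi / 2])),
         (1, 2, np.array([1.0, 0.0, np.pi / 2])),
         (2, 3, np.array([1.0, 0.0, np.pi / 2])),
         (3, 0, np.array([1.0, 0.0, np.pi / 2]))]  # detected loop closure
sol = least_squares(residuals, poses.ravel(), args=(edges, len(poses)))
optimized = sol.x.reshape(-1, 3)
```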



Abstract

The invention discloses a robust positioning and mapping method and system fusing laser and visual information. The method comprises the following steps: computing a non-visual pose T_t^m by fusion; estimating a visual pose T_t^v from the visual image and, if the estimation fails, performing non-visual assisted relocalization using the non-visual pose T_t^m to obtain the visual pose T_t^v; fusing the visual pose T_t^v and the non-visual pose T_t^m to obtain a fused robot pose T_t^f; and carrying out closed-loop detection and, once a closed loop is detected, globally optimizing the robot poses with a graph-based closed-loop optimization algorithm, so as to obtain a two-dimensional grid map and a three-dimensional point cloud map. The method can fuse information from multiple sensors under severe conditions in which the robot moves too fast, the illumination is extreme, or scene texture features are missing; it solves the problem of lost visual SLAM tracking, improves the positioning precision of the robot, and at the same time constructs an accurate two-dimensional grid map and an accurate three-dimensional point cloud map.

Description

Technical field

[0001] The invention relates to robot positioning technology based on multi-sensor information fusion, and in particular to a robust positioning and mapping method and system fusing laser and visual information.

Background technique

[0002] In recent years, with the rapid development of robot technology, Simultaneous Localization and Mapping (SLAM), a basic key technology in the field of robotics, still has some unresolved problems. By the sensors they use, SLAM solutions can be mainly divided into lidar SLAM and visual SLAM. The main sensor used in lidar SLAM is lidar, which has the advantages of high precision and fast speed, is not affected by ambient light, and can accurately obtain the distance and angle of obstacles. However, two-dimensional lidar can only obtain information on a single plane, and this amount of information is insufficient to detect obstacles outside the laser scanning plane. At present, the price of 3D lidar is g...


Application Information

IPC(8): G01C21/20; G01C21/16; G01C21/00; G01S7/48; G06T7/73
CPC: G01C21/005; G01C21/165; G01C21/20; G01S7/4802; G06T7/73
Inventor: 李树涛 (Li Shutao), 洪骞 (Hong Qian), 孙斌 (Sun Bin)
Owner: HUNAN UNIV