Laser-vision strong-coupling SLAM method based on an adaptive factor graph
An adaptive-factor, strong-coupling SLAM technology, applied to the re-radiation of electromagnetic waves, measuring devices, instruments, etc. It addresses the problems of low utilization of feature information, insufficient integration of the vision system with the radar system, and the inability to dynamically adjust the multi-sensor data fusion method, achieving improved accuracy, improved stability, and strong robustness.
Examples
Embodiment 1
[0065] Embodiment 1. Referring to Figure 1, this embodiment is described. The laser vision strong coupling SLAM method based on an adaptive factor graph described in this embodiment includes:
[0066] Lidar module, monocular camera module, IMU module, laser loop detection unit, factor graph optimizer unit, monocular loop detection unit and map storage module;
[0067] The lidar module includes a laser odometer unit and a scene detection unit;
[0068] The monocular camera module includes a scene detection unit and a reprojection error calculation unit;
[0069] The IMU module includes an IMU pre-integration unit;
[0070] Four kinds of robot scenes are defined, and the scene detection module judges the scene the robot is currently in according to the collected lidar and monocular camera data;
[0071] The data received by the IMU is preprocessed according to the scene where the robot is located, and the relative pose of the robot between two frames of lidar data is calculated...
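As a rough illustration of the IMU pre-integration step referred to above, the sketch below accumulates the relative rotation, velocity, and position of the robot between two lidar frames. The function names, the first-order integration scheme, and the assumption that samples are already bias- and gravity-compensated are mine, not the patent's:

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix of a 3-vector (used in the rotation update)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def preintegrate_imu(gyro, accel, dt):
    """Integrate IMU samples between two lidar frames.

    gyro, accel: (N, 3) arrays of angular rate (rad/s) and specific force
    (m/s^2), assumed already compensated for bias and gravity; dt: sample
    period in seconds.  Returns the relative rotation R and the velocity
    and position increments dv, dp in the frame of the first sample.
    """
    R = np.eye(3)
    dv = np.zeros(3)
    dp = np.zeros(3)
    for w, a in zip(gyro, accel):
        dp = dp + dv * dt + 0.5 * (R @ a) * dt ** 2
        dv = dv + (R @ a) * dt
        # First-order exponential-map update of the rotation.
        R = R @ (np.eye(3) + skew(w * dt))
    return R, dv, dp
```

With zero rotation and a constant 1 m/s^2 acceleration along x over one second, the sketch returns dv = (1, 0, 0) and dp = (0.5, 0, 0), matching constant-acceleration kinematics.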
Embodiment 2
[0076] Embodiment 2. Referring to Figure 1, this embodiment is described. This embodiment further limits the laser vision strong coupling SLAM method based on the adaptive factor graph described in Embodiment 1. In this embodiment, the scene detection module judges the current scene of the robot according to the collected lidar and monocular camera data, including:
[0077] Convert the collected 3D lidar data into a height map;
[0078] Project the 3D point cloud onto an X-Y plane grid map. Where multiple points fall in the same grid cell, keep the point with the highest Z value as the value of that cell, and fit a quadratic curve to the retained grid points by the RANSAC method;
[0079] Delete grid cells that do not contain projected points;
[0080] Calculate the space area S enclosed by the fitted quadratic curve;
[0081] If the space area S is smaller than the threshold, it is determined that the robot is in an indoor...
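The height-map, RANSAC-fit, and area steps above can be sketched as follows. The function names, cell size, inlier tolerance, and iteration count are illustrative choices of mine; the patent does not specify them:

```python
import numpy as np

def height_map(points, cell=0.5):
    """Project a 3-D point cloud onto an X-Y grid, keeping the highest Z
    per occupied cell; cells with no projected points are never created,
    which implements the 'delete empty cells' step implicitly."""
    grid = {}
    for x, y, z in points:
        key = (int(np.floor(x / cell)), int(np.floor(y / cell)))
        if key not in grid or z > grid[key]:
            grid[key] = z
    # Return the X-Y centres of the occupied cells.
    return np.array([[(i + 0.5) * cell, (j + 0.5) * cell]
                     for (i, j) in grid])

def ransac_quadratic(xy, iters=200, tol=0.3, seed=0):
    """Fit y = a*x^2 + b*x + c to the retained cells by RANSAC."""
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        sample = xy[rng.choice(len(xy), 3, replace=False)]
        coeffs = np.polyfit(sample[:, 0], sample[:, 1], 2)
        resid = np.abs(np.polyval(coeffs, xy[:, 0]) - xy[:, 1])
        inliers = int((resid < tol).sum())
        if inliers > best_inliers:
            best, best_inliers = coeffs, inliers
    return best

def enclosed_area(coeffs, x_min, x_max, n=1000):
    """Area between the fitted curve and the x-axis on [x_min, x_max],
    by trapezoidal integration (one plausible reading of 'area S')."""
    x = np.linspace(x_min, x_max, n)
    y = np.abs(np.polyval(coeffs, x))
    dx = x[1] - x[0]
    return float(np.sum((y[:-1] + y[1:]) * dx / 2.0))
```

Comparing the resulting S against a threshold then yields the indoor/outdoor decision of paragraph [0081].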
Embodiment 3
[0083] Embodiment 3. Referring to Figure 2 and Figure 3, this embodiment is described. This embodiment further limits the laser vision strong coupling SLAM method based on an adaptive factor graph described in Embodiment 2. In this embodiment, when the robot is in an outdoor scene, it is determined whether there is enough structural information in the scene to ensure the stability of feature point extraction:
[0084] The point cloud data is subjected to object segmentation processing;
[0085] If any two adjacent laser point cloud points fall on the same object surface, then the tangent direction of the line segment formed by the two points should point toward the lidar; the included angle β is given by:
[0086] β = arccos( ((A − B) · (O − B)) / (‖A − B‖ · ‖O − B‖) )
[0087] where points A and B are two adjacent laser point cloud points and point O is the lidar;
[0088] If the included angle β is greater than the set threshold, the two adjacent laser point cloud points lie on the same object;
[0089] If the included ang...
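The geometric test above can be sketched as follows. Reading β as the angle at point B between segment BA and the line from B back to the sensor is my interpretation of the lost formula, and the 10-degree threshold is purely illustrative:

```python
import numpy as np

def beta_angle(a, b):
    """Included angle beta at point B between segment BA and the beam BO
    (lidar at the origin O).  a, b: 3-vectors of two adjacent laser
    points.  This reading of the patent's geometry is an assumption."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    ba = a - b
    bo = -b  # vector from B back to the sensor origin
    cos_b = np.dot(ba, bo) / (np.linalg.norm(ba) * np.linalg.norm(bo))
    return float(np.arccos(np.clip(cos_b, -1.0, 1.0)))

def same_object(a, b, thresh_deg=10.0):
    """Attribute adjacent points to the same surface when beta exceeds
    the threshold: a surface seen head-on gives beta near 90 degrees,
    while a large depth jump between the points drives beta toward 0."""
    return np.degrees(beta_angle(a, b)) > thresh_deg
```

For two neighbouring points on a wall at x = 5 m the angle comes out close to 90 degrees, so they are grouped together; if the second point instead lies far behind the first (a depth discontinuity), β collapses toward zero and the points are split into different objects.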