[0040] Example 1:
[0041] As shown in Figure 1, the present invention provides a multi-source information fusion robot positioning method for unstructured environments, comprising:
[0042] using the real-time laser point cloud acquired by the multi-line lidar and the pre-built point cloud map to perform point cloud registration and calculate the current pose of the robot body;
[0043] using the binocular camera to obtain image information of the current environment, extracting the ORB visual feature points in each image frame to form visual keyframes, and matching the current keyframe against the pre-built visual feature map for relocalization, thereby obtaining the current position information of the robot;
[0044] at the same time, during the movement of the robot, integrating the acceleration measured by the inertial measurement unit to output odometry information;
[0045] based on the pose information obtained from the above three sensors, estimating the real-time state of the robot by filtering, and fusing and outputting the final positioning information.
[0046] As shown in Figure 1, the main steps of this embodiment are:
[0047] S1. In the initial stage of the robot's work, the geometric features of the working environment are first extracted from the current real-time laser point cloud. The environmental geometric features are computed as follows: for each laser point on each laser scanning line, a curvature value is computed from its ten adjacent points; when the curvature of a point is the maximum among these twelve laser points, the point is taken as a feature-salient point, representing a feature-salient region of the environment; when the curvature of a point is the minimum among these twelve laser points, it is taken as a weakened feature point. These two kinds of points together constitute the geometric feature points. In areas such as pipe galleries and roadways, the marked reduction of feature-salient points leads to failure of laser point cloud registration and positioning. Here, a ratio of feature-salient points to geometric feature points of 20% is set as the threshold. In geometrically degraded areas such as roadways and pipe corridors, the number of feature-salient points is far smaller than the number of weakened feature points. If the geometric features of the current environment are lacking and the ratio falls below the set threshold, positioning is handed over to the inertial measurement unit. If the current environment is rich in geometric corners and other features, that is, the proportion of laser feature corner points (feature-salient points) among all geometric feature points (corner points plus feature plane points, i.e., weakened feature points) exceeds the set threshold, laser point cloud registration is performed: the point cloud feature map of the robot's current environment is divided into uniform, regular, fixed-size cells according to a preset resolution, and the probability density function of each cell is computed from the scanning points falling within it. The real-time laser point cloud is transformed into the coordinate system of the existing target point cloud map using an initial pose transformation, the total probability of the transformed point cloud being registered is computed, and an optimization objective function is built on this value and iterated until the optimal registration transformation, that is, the positioning information of the robot, is obtained and sent to the filtering and fusion processing unit.
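The following is a minimal sketch of the per-scan-line curvature computation and the degeneracy check described in step S1 (the function name, array layout, and boundary handling are illustrative assumptions; only the ten-neighbor window and the 20% salient-point ratio come from the text above):

```python
import numpy as np

def classify_scan_line(points, half_window=5, salient_ratio_threshold=0.20):
    """Classify the points of one laser scan line (shape (N, 3), ordered by
    scan angle) into feature-salient points (local curvature maxima) and
    weakened feature points (local curvature minima), then test degeneracy."""
    n = len(points)
    curvature = np.full(n, np.nan)
    for i in range(half_window, n - half_window):
        # Sum of offsets from the point to its ten neighbours; its squared
        # norm serves as the curvature value of the laser point.
        window = points[i - half_window:i + half_window + 1]
        diff = window.sum(axis=0) - (2 * half_window + 1) * points[i]
        curvature[i] = float(diff @ diff)

    salient, weakened = [], []
    for i in range(half_window, n - half_window):
        local = curvature[i - half_window:i + half_window + 1]
        if curvature[i] == np.nanmax(local):
            salient.append(i)      # feature-salient point (corner-like)
        elif curvature[i] == np.nanmin(local):
            weakened.append(i)     # weakened feature point (plane-like)

    total = len(salient) + len(weakened)
    # Degeneracy check: if salient points fall below 20% of all geometric
    # feature points, positioning is handed over to the inertial measurement
    # unit instead of running point cloud registration.
    degenerate = total == 0 or len(salient) / total < salient_ratio_threshold
    return salient, weakened, degenerate
```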
[0048] S2. In the initial stage of the robot's work, the high-precision inertial measurement unit is started. By integrating the acceleration and angular velocity of the robot's motion obtained by the measurement unit, the pose information of the robot is obtained and sent to the filtering and fusion processing unit.
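A minimal dead-reckoning sketch of the integration in step S2, assuming a planar (x, y, yaw) state and body-frame accelerometer and gyroscope readings (the function name, state layout, and planar simplification are illustrative assumptions; a full implementation would also integrate the 3D orientation):

```python
import numpy as np

def integrate_imu(pose, velocity, accel_body, gyro_z, dt):
    """One planar dead-reckoning step from IMU data.

    pose:       (x, y, yaw) of the robot in the odometry frame.
    velocity:   (vx, vy) expressed in the odometry frame.
    accel_body: (ax, ay) measured acceleration in the body frame.
    gyro_z:     yaw rate measured by the gyroscope.
    """
    x, y, yaw = pose
    # Integrate angular velocity to update the heading.
    yaw += gyro_z * dt
    # Rotate body-frame acceleration into the odometry frame.
    c, s = np.cos(yaw), np.sin(yaw)
    ax = c * accel_body[0] - s * accel_body[1]
    ay = s * accel_body[0] + c * accel_body[1]
    # Integrate acceleration twice: first velocity, then position.
    vx, vy = velocity[0] + ax * dt, velocity[1] + ay * dt
    x += vx * dt
    y += vy * dt
    return (x, y, yaw), (vx, vy)
```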
[0049] S3. In the initial stage of the robot's work, it is determined from the real-time images of the robot's surroundings captured by the binocular vision camera whether the current working environment has sufficient lighting. If the number of extracted visual feature points is below the set threshold, positioning is handed over to the inertial measurement unit. If the number of real-time ORB visual features extracted from the current environment is above the set threshold, the visual image is relocalized against the pre-built visual feature map, and the matching process is accelerated by a visual feature bag-of-words model and a K-D tree storage structure. In the stage of clustering and segmenting the visual feature points, a tree-like storage structure is constructed: the existing feature points are first divided into k classes, each class is then divided into k classes again, and the same operation is repeated for the other classes until the segmentation reaches layer D. During real-time image relocalization, the visual feature points in the current image frame are compared and searched against this tree structure layer by layer from the root, and branches with a low matching degree are skipped, thereby avoiding low-matching visual feature categories and speeding up the matching search. The current robot pose information is thus obtained and the data are sent to the filtering and fusion processing unit.
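A minimal sketch of the k-branch, D-layer vocabulary tree and the layer-by-layer best-branch lookup described in step S3 (real ORB bag-of-words implementations such as DBoW2 cluster binary descriptors with Hamming distance; the Euclidean k-means on float-valued descriptors below, and all function names, are illustrative assumptions):

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocab_tree(descriptors, k=10, depth=3):
    """Recursively split feature descriptors (float array, e.g. ORB
    descriptors cast to float) into k clusters per level until layer D
    (`depth`), forming a bag-of-words vocabulary tree."""
    node = {"center": descriptors.mean(axis=0), "children": []}
    if depth == 0 or len(descriptors) < k:
        return node                      # leaf node: a visual word
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(descriptors)
    for c in range(k):
        subset = descriptors[labels == c]
        if len(subset) > 0:
            node["children"].append(build_vocab_tree(subset, k, depth - 1))
    return node

def lookup(tree, descriptor):
    """Descend the tree layer by layer from the root, always following the
    best-matching branch and skipping low-matching branches, until a leaf
    (visual word) is reached."""
    node = tree
    while node["children"]:
        node = min(node["children"],
                   key=lambda child: np.linalg.norm(descriptor - child["center"]))
    return node["center"]
```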
[0050] The above steps S1, S2 and S3 are performed synchronously in the robot initialization phase.
[0051] S4. A state prediction model of the mobile robot is constructed, as shown in formula (1):
[0052] x_k = F_k x_{k-1} + B_k u_k    (1)
[0053] where x_k is the current state of the robot, the system state being described by the robot's current position p and velocity v; B_k is the control matrix; u_k describes control commands such as the acceleration sent to the robot; and F_k is the state transition matrix, which propagates the position and velocity state values of the previous moment to the current moment. To describe how the uncertainty of the state at the previous moment is updated to the current moment, formula (2) is constructed:
[0054] P_k = F_k P_{k-1} F_k^T + Q_k    (2)
[0055] Here, P_{k-1} is the state information at the previous moment, that is, the covariance matrix of position and velocity, and P_k is the state information at the current moment. During the robot's movement, uncontrolled factors such as uneven ground and changes in wind are captured by Q_k, which defines the external uncertainty. After obtaining the lidar positioning data, the measured and integrated data of the inertial measurement unit, and the visual relocalization data, that is, the measured values of the current robot state (position and velocity), an observation model of the robot state is constructed from these measurements, as shown in formula (3):
[0056] (μ, Σ) = (z_k, R_k)    (3)
[0057] Here, considering that the state of the current observation model follows a Gaussian distribution, μ is the mean of the current observation model, that is, the stable value of the measurements; Σ is the uncertainty of the measurement distribution, that is, the variance; z_k describes the mean of the measurements; and R_k describes the uncertainty of the measurements. In the fusion of the prediction model and the measurement model, both are Gaussian distributions and are mutually independent. By multiplying the Gaussian distributions of the prediction and the measurement, the mean and variance of the next state are obtained, that is, the distribution of the robot's predicted values for the next state. The distribution of the measurement model is multiplied with the distribution of the prediction model to obtain the state distribution at the next moment, realizing the update of the prediction model. This embodiment realizes the acquisition of positioning information for a robot in an unstructured environment through fused processing of data from multiple sensors such as the multi-line lidar, binocular vision, and inertial measurement unit, and has strong adaptability to interference from dynamic obstacles and complex weather changes.
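A minimal sketch of the prediction and fusion described by formulas (1)-(3), in which multiplying the predicted Gaussian (x, P) by the measured Gaussian (z_k, R_k) reduces to the standard Kalman update (the constant-velocity example state, the matrix values, and the identity observation matrix are illustrative assumptions):

```python
import numpy as np

def predict(x, P, F, B, u, Q):
    """Prediction step, formulas (1) and (2)."""
    x = F @ x + B @ u                 # x_k = F_k x_{k-1} + B_k u_k
    P = F @ P @ F.T + Q               # P_k = F_k P_{k-1} F_k^T + Q_k
    return x, P

def update(x, P, z, R):
    """Fusion step: the product of the predicted Gaussian (x, P) and the
    measured Gaussian (z, R) gives the fused mean and covariance, written
    here in the equivalent Kalman-gain form with an identity observation."""
    K = P @ np.linalg.inv(P + R)      # Kalman gain
    x = x + K @ (z - x)               # fused mean
    P = (np.eye(len(x)) - K) @ P      # fused covariance
    return x, P

# Illustrative 1D constant-velocity example: state = [position p, velocity v].
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])        # state transition matrix F_k
B = np.array([[0.5 * dt ** 2], [dt]])        # control matrix B_k (acceleration input)
Q = np.eye(2) * 1e-3                         # external uncertainty Q_k
x, P = np.zeros(2), np.eye(2)
x, P = predict(x, P, F, B, u=np.array([0.2]), Q=Q)                  # IMU acceleration command
x, P = update(x, P, z=np.array([0.05, 0.1]), R=np.eye(2) * 0.04)    # lidar/visual measurement
```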
[0058] In this embodiment, the real-time data of multiple sensors such as the multi-line lidar, binocular vision, and inertial measurement unit are fused, realizing robust real-time localization under severe weather conditions, in dynamic unstructured environments, and under severe illumination changes. In roadway areas with degraded geometry, the registration and positioning of the multi-line lidar fails for lack of sufficient external features; at this time the visual positioning module can still work normally and complete the robot's relocalization by detecting and matching image information. In environments lacking sufficient light, the lidar and inertial measurement unit can still provide sufficient localization fusion data for the computing unit. The upper-layer computing unit plans the motion route and sends control commands to the lower-layer motion control module according to the robot's current pose, and obtains the robot's current state fed back by the lower layer in real time, realizing tight coupling between positioning and control and ensuring the stable operation of the mobile robot in its working environment.
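A hedged sketch of the sensor fallback logic summarized above (the function name and the visual-feature count threshold are illustrative assumptions; the 20% salient-point ratio matches step S1):

```python
def select_measurement_sources(salient_ratio, num_orb_features,
                               salient_threshold=0.20, orb_threshold=100):
    """Choose which sensor outputs feed the fusion filter.

    salient_ratio:    proportion of feature-salient points among geometric
                      feature points (from step S1).
    num_orb_features: number of ORB features extracted from the current
                      image (illustrative stand-in for the lighting check).
    """
    sources = ["imu"]                          # IMU odometry is always available
    if salient_ratio >= salient_threshold:     # enough geometric features:
        sources.append("lidar")                # lidar registration is valid
    if num_orb_features >= orb_threshold:      # enough light / visual features:
        sources.append("vision")               # visual relocalization is valid
    return sources
```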