Multi-source information fusion robot positioning method and system for unstructured environment

A multi-source information fusion and robot positioning technology, applied in the fields of radio wave measurement systems, instruments, utilization of re-radiation and the like, which ensures stable operation and achieves tight coupling between positioning and control.

Pending Publication Date: 2022-07-01
SHANDONG YOUBAOTE INTELLIGENT ROBOTICS CO LTD

AI-Extracted Technical Summary

Problems solved by technology

[0004] In order to solve the above problems, the present invention proposes a multi-source information fusion robot positioning method and system for unstructured environments. The present invention solves the problem of mobile po...


Abstract

According to the multi-source information fusion robot positioning method and system for unstructured environments provided by the invention, fusion processing is carried out on multi-source data acquired in real time, such as laser point clouds, environment image information and acceleration, so that highly robust real-time positioning can be achieved in severe weather, in dynamic unstructured environments and under drastic illumination changes. In geometrically degraded roadway areas, registration-based positioning of the laser point cloud fails due to the lack of sufficient external features; at that moment the visual positioning module can still work normally and completes robot relocalization by detecting and matching image information. In an environment lacking sufficient illumination, the laser point cloud and acceleration information can still provide sufficient positioning fusion data input for the computing unit.

Application Domain

Navigational calculation instruments; Navigation by speed/acceleration measurements (+1)

Technology Topic

Severe weather; Engineering (+9)


Examples

  • Experimental program (5)

Example Embodiment

[0040] Example 1:
[0041] As shown in Figure 1, the present invention provides a multi-source information fusion robot positioning method for unstructured environments, including:
[0042] performing point cloud registration between the real-time laser point cloud acquired by the multi-line lidar and the constructed point cloud map to calculate the current pose information of the robot body;
[0043] using a binocular camera to obtain image information of the current environment, extracting the ORB visual feature points in each image frame to form a visual key frame, and relocating and matching the current key frame against the built visual feature map to obtain the current position information of the robot;
[0044] at the same time, during the movement of the robot, integrating the acceleration from the inertial measurement unit to output odometry information;
[0045] based on the pose information obtained from the above three sensors, estimating the real-time state of the robot by filtering, and fusing and outputting the final positioning information.
[0046] As shown in Figure 1, the main steps of this embodiment are:
[0047] S1. In the initial stage of the robot's work, the geometric features of the working environment are first extracted from the current real-time laser point cloud. The environmental geometric features are computed as follows: for each laser point on each laser scan line, its curvature is calculated from its ten adjacent points; when the curvature of the point is the maximum among the twelve laser points, it is taken as a feature-salient point, representing a feature-salient area in the environment; when the curvature of the point is the minimum among the twelve laser points, it is taken as a feature-weak point. These two kinds of points together constitute the geometric feature points. In areas such as pipe galleries and roadways, the significant reduction of feature-salient points leads to failure of laser point cloud registration and positioning. Here, a threshold of 20% is set for the proportion of feature-salient points among the geometric feature points. In geometrically degraded areas such as roadways and pipe corridors, the number of feature-salient points is far smaller than the number of feature-weak points. If the geometric features of the current environment are missing to the point of falling seriously below the set threshold, positioning is handed over to the inertial measurement unit. If the current environment is rich in geometric corners and similar features, that is, the proportion of laser feature corner points (feature-salient points) relative to the geometric feature points (corner points plus feature plane points, i.e. feature-weak points) is above the set threshold, laser point cloud registration is performed: the point cloud feature map of the robot's current environment is divided into uniform, regular, fixed-size cells according to a preset resolution, and the probability density function of each cell is calculated from the scan points it contains. The real-time laser point cloud is transformed into the coordinate system of the existing target point cloud map using an initial pose transformation, the total probability of the transformed point cloud being registered is calculated, and an optimization objective function is built on this value and iterated until the optimal registration transformation, that is, the positioning information of the robot, is obtained and sent to the filtering and fusion processing unit.
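The following is a minimal illustrative sketch, not the patent's implementation, of the geometric-feature check in S1. It assumes a single lidar scan line given as an ordered (N, 3) NumPy array; the function names, the LOAM-style curvature approximation and the selection of salient and weak points by local curvature extrema are all assumptions made for illustration.

```python
# Minimal illustrative sketch (not the patent's code) of the S1 geometric-feature check,
# assuming one lidar scan line given as an (N, 3) NumPy array of points ordered by angle.
import numpy as np

def curvature(scan_line: np.ndarray, half_window: int = 5) -> np.ndarray:
    """Curvature of each point computed from its ten adjacent points (five on each side)."""
    n = len(scan_line)
    c = np.zeros(n)
    for i in range(half_window, n - half_window):
        neighbours = scan_line[i - half_window:i + half_window + 1]
        # Sum of difference vectors between the neighbours and the point (LOAM-style approximation).
        diff = (neighbours - scan_line[i]).sum(axis=0)
        c[i] = np.linalg.norm(diff) / max(np.linalg.norm(scan_line[i]), 1e-9)
    return c

def is_geometrically_degraded(scan_line: np.ndarray,
                              salient_ratio_threshold: float = 0.20) -> bool:
    """True if feature-salient points fall below 20% of all geometric feature points,
    in which case positioning is handed over to the inertial measurement unit."""
    c = curvature(scan_line)
    # Approximate the selection of salient / weak points by local curvature extrema.
    salient = (c > np.roll(c, 1)) & (c > np.roll(c, -1))   # corner-like points
    weak = (c < np.roll(c, 1)) & (c < np.roll(c, -1))      # plane-like points
    total = salient.sum() + weak.sum()
    return total == 0 or salient.sum() / total < salient_ratio_threshold
```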
[0048] S2. In the initial stage of the robot's work, the high-precision inertial measurement unit is started. By integrating the acceleration and angular velocity of the robot's motion obtained by the measurement unit, the pose information of the robot is obtained and sent to the filtering and fusion processing unit.
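A minimal dead-reckoning sketch of the integration in S2, restricted to the planar case for brevity; the function signature and frame conventions are hypothetical rather than taken from the patent.

```python
# Minimal planar dead-reckoning sketch for S2 (hypothetical names and frame conventions):
# the angular velocity is integrated into the heading, the acceleration into velocity and position.
import numpy as np

def integrate_imu(pose, velocity, acc_body, gyro_z, dt):
    """pose = (x, y, yaw); velocity = 2D world-frame array; acc_body = 2D body-frame acceleration."""
    x, y, yaw = pose
    yaw = yaw + gyro_z * dt                          # integrate angular velocity into heading
    rot = np.array([[np.cos(yaw), -np.sin(yaw)],
                    [np.sin(yaw),  np.cos(yaw)]])
    acc_world = rot @ np.asarray(acc_body)           # rotate acceleration into the world frame
    velocity = velocity + acc_world * dt             # first integration: velocity
    x, y = np.array([x, y]) + velocity * dt          # second integration: position (odometry)
    return (x, y, yaw), velocity
```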
[0049] S3. In the initial stage of the robot's work, the real-time images of the robot's surroundings detected by the binocular vision camera are used to determine whether the current working environment has sufficient lighting. If the number of extracted visual feature points is below the set threshold, positioning is handed over to the inertial measurement unit. If the real-time ORB visual features extracted from the current environment are above the set threshold, the visual image is relocalized against the built visual feature map, and the matching process is accelerated by a visual feature bag-of-words model with a k-d tree style storage structure. In the stage of clustering and segmenting the visual feature points, a tree-like storage structure is constructed: the existing feature points are first partitioned into k classes, each class is then partitioned into k classes again, and the same operation is repeated for the other classes until the segmentation reaches layer D. When the image is relocalized in real time, the visual feature points of the current image frame are compared and searched against the tree structure layer by layer starting from the root, and branches with a low matching degree are skipped, so that low-matching visual feature categories are avoided and the matching search is accelerated. The current robot pose information is thus obtained and the data is sent to the filtering and fusion processing unit.
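The k-branch, D-layer clustering and the layer-by-layer search described in S3 can be sketched as follows. This is a toy illustration only: it uses scikit-learn's KMeans on float descriptors, whereas real ORB descriptors are binary and bag-of-words libraries typically cluster them with Hamming-distance k-medians; all names are hypothetical.

```python
# Toy sketch of the k-branch, D-layer visual vocabulary tree and its layer-by-layer search.
# Uses scikit-learn's KMeans on float descriptors for illustration; real ORB descriptors
# are binary and are usually clustered with Hamming-distance k-medians (DBoW-style).
import numpy as np
from sklearn.cluster import KMeans

def build_vocab_tree(descriptors: np.ndarray, k: int = 10, depth: int = 4):
    """Recursively partition the feature descriptors into k classes, down to layer D."""
    if depth == 0 or len(descriptors) < k:
        return {"centers": None, "children": None}            # leaf node
    km = KMeans(n_clusters=k, n_init=5).fit(descriptors)
    children = [build_vocab_tree(descriptors[km.labels_ == i], k, depth - 1)
                for i in range(k)]
    return {"centers": km.cluster_centers_, "children": children}

def lookup(tree, descriptor: np.ndarray) -> int:
    """Descend from the root, following only the best-matching branch at each layer
    and skipping low-matching branches, which is what accelerates the matching search."""
    node, word = tree, 0
    while node["centers"] is not None:
        best = int(np.argmin(np.linalg.norm(node["centers"] - descriptor, axis=1)))
        word = word * len(node["children"]) + best
        node = node["children"][best]
    return word   # leaf index used as the visual word id of this descriptor
```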
[0050] The above steps S1, S2 and S3 are performed synchronously in the robot initialization phase.
[0051] Step S4: a state prediction model of the mobile robot is constructed, as shown in formula (1):
[0052] x_k = F_k x_{k-1} + B_k u_k   (1)
[0053] where x_k is the current state of the robot, and the system state is described by the robot's current position p and velocity v; B_k is the control matrix; u_k describes the control commands, such as acceleration, sent to the robot; F_k is the state transition matrix, which propagates the position and velocity state values of the previous moment to the current moment. To describe how the uncertainty of the state at the previous moment is updated to the current moment, formula (2) is constructed:
[0054] P_k = F_k P_{k-1} F_k^T + Q_k   (2)
[0055] where P_{k-1} is the state information at the previous moment, that is, the covariance matrix of position and velocity, and P_k is the state information at the current moment; during the robot's movement, uncontrolled factors such as uneven ground and changing wind are captured by Q_k, which defines the external uncertainty. After the lidar positioning data, the inertial measurement unit's measured and integrated data, and the visual relocalization data are obtained, that is, the measured values of the current robot state (position, velocity), the observation model of the robot state is constructed from these measurements, as shown in formula (3):
[0056] (μ, Σ) = (z_k, R_k)   (3)
[0057] where, considering that the current observation model state follows a Gaussian distribution, μ is the mean of the current prediction model, that is, the stable value of the measurement; Σ is the uncertainty of the prediction model's measurement distribution, that is, the variance; z_k describes the mean of the measurements and R_k describes the uncertainty of the measurement. In the fusion of the prediction model and the measurement model, both are Gaussian and independent and identically distributed; by multiplying the Gaussian distributions of the prediction and the measurement, the mean and variance of the next state are obtained, that is, the distribution of the predicted values of the robot's next state. The distribution of the measurement model is multiplied with the distribution of the prediction model to obtain the state distribution at the next moment, realizing the update of the prediction model. This embodiment realizes the acquisition of positioning information for a robot in an unstructured environment through data fusion of multiple sensors such as the multi-line lidar, binocular vision and the inertial measurement unit, and has strong adaptability to interference from dynamic obstacles and complex weather changes.
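The prediction model (1)-(2) and the Gaussian-product fusion (3) correspond to a standard Kalman filter predict/update cycle. A minimal sketch follows; the function names and the observation matrix H are illustrative additions (the text above effectively treats the measurement as observing the full state, i.e. H = I).

```python
# Minimal Kalman-style sketch of the prediction model (1)-(2) and measurement fusion (3),
# with state x = [p, v] (position and velocity). Names are hypothetical.
import numpy as np

def predict(x, P, F, B, u, Q):
    """State prediction: x_k = F x_{k-1} + B u_k and P_k = F P_{k-1} F^T + Q_k."""
    x = F @ x + B @ u
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z, R, H=None):
    """Fuse a measurement with mean z and covariance R: multiplying the predicted and
    measured Gaussians is equivalent to the standard Kalman update below."""
    H = np.eye(len(x)) if H is None else H
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (z - H @ x)              # fused mean of the next state
    P = (np.eye(len(x)) - K @ H) @ P     # fused covariance of the next state
    return x, P
```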
[0058] In this embodiment, the real-time data of multiple sensors such as the multi-line lidar, the binocular vision camera and the inertial measurement unit are fused, so that highly robust real-time localization is achieved under severe weather, in dynamic unstructured environments and under drastic illumination changes. In geometrically degraded roadway areas, registration-based positioning of the multi-line lidar fails due to the lack of sufficient external features; at this time the visual positioning module can still work normally and completes robot relocalization by detecting and matching image information. In an environment lacking sufficient light, the lidar and the inertial measurement unit can still provide sufficient positioning fusion data for the computing unit. The upper-layer computing unit plans the motion route and sends control commands to the lower-layer motion control module according to the current pose of the robot, and obtains the current state of the robot fed back by the lower layer in real time, realizing tight coupling between positioning and control and ensuring the stable operation of the mobile robot in its working environment.

Example Embodiment

[0059] Example 2:
[0060] This embodiment provides a multi-source information fusion robot positioning system for an unstructured environment, including:
[0061] The laser point cloud processing module is configured to: acquire the real-time laser point cloud of the robot, perform point cloud registration with the preset point cloud map, and calculate the current pose information of the robot;
[0062] The image information processing module is configured to: acquire image information of the robot's current environment, extract the ORB visual feature points in each image frame to form a visual key frame, and relocate and match the current key frame against the preset visual feature map to obtain the current position information of the robot;
[0063] The acceleration processing module is configured to: acquire the acceleration of the robot and obtain odometry information by integration;
[0064] The positioning information prediction module is configured to: filter the obtained pose information, position information and odometry information, estimate the real-time state of the robot, and fuse and output the positioning information of the robot.
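As an illustration only (none of these interfaces appear in the patent), one possible wiring of the four modules above is sketched below: each source module is assumed to return a measurement or nothing, and the prediction module fuses whatever is currently available.

```python
# Illustration only: a possible wiring of the four modules above (hypothetical interfaces).
from dataclasses import dataclass
import numpy as np

@dataclass
class Measurement:
    mean: np.ndarray         # measured state, e.g. position and velocity
    covariance: np.ndarray   # uncertainty of that measurement

class PositioningSystem:
    def __init__(self, laser_module, image_module, acceleration_module, prediction_module):
        self.sources = [laser_module, image_module, acceleration_module]
        self.prediction = prediction_module

    def step(self):
        # Collect whatever each module can currently provide and fuse it into one pose estimate.
        # Each source's measure() is assumed to return a Measurement or None.
        measurements = [m for src in self.sources if (m := src.measure()) is not None]
        return self.prediction.fuse(measurements)
```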
[0065] The working method of the system is the same as that of the multi-source information fusion robot positioning method for unstructured environment in Embodiment 1, and will not be repeated here.

Example Embodiment

[0066] Example 3:
[0067] This embodiment provides a control system for a multi-source information fusion positioning robot in an unstructured environment, including:
[0068] The multi-line lidar positioning module is configured to: emit a multi-line laser point cloud to scan the physical environment around the robot, and perform registration and positioning between the real-time laser point cloud and the existing environment point cloud map; the lidar generates real-time point cloud information of the robot's surroundings, and the current robot pose information is calculated by the point cloud registration algorithm;
[0069] The binocular vision positioning module is configured to: acquire real-time image data around the robot and relocalize it against the existing environmental visual feature point map; obtain real-time image information in front of the robot and extract visual ORB feature points from the image pixel information; according to the visual feature points of the image key frame, search and match using the visual dictionary generated by offline training, so that the current image frame can be loop-closed and relocalized in the built global visual feature point map, and output the current pose of the robot after coordinate transformation;
[0070] The inertial measurement unit module is configured to: obtain position information by integrating the acceleration;
[0071] The upper-layer computing and processing unit module is configured to: fuse the positioning data, eliminate the influence of noise, output the filtered robot pose information, and send motion instructions to the mobile robot. It obtains the current angular velocity and acceleration of the robot and obtains pose information by integration; it constructs the state prediction model and the measurement model of the robot, and uses filtering to fuse the lidar point cloud positioning data, the binocular vision positioning data and the inertial measurement unit data, so as to predict the robot's state at the next moment and correct its current state. According to the current state of the robot, the computing and processing unit module compares and computes against the path points of the planned path, and outputs the robot's next movement speed, acceleration and steering angle to the lower-layer control module;
[0072] The lower-layer motion control module is configured to: receive motion commands from the computing and processing unit module, send action commands to the robot's driver, and feed back the robot's current motion state to the computing and processing unit module. It receives the control commands of the upper-layer computing and processing unit module, resolves the movement speed and steering angle information from the commands, and sends this information to the drive module to control the actuator motors and realize the robot's motion; at the same time, the robot feeds back its real-time motion information to the upper-layer computing and processing unit module.
[0073] In a specific implementation, the state of the robot is predicted and modeled, and the measured values of the multi-line lidar, binocular vision and inertial measurement unit are used for observation modeling; the multi-source data is fused in the computing and processing unit module to obtain the robot pose information closest to the real state. According to the robot's current position point, the computing and processing unit performs trajectory planning between that point and the planned path points, obtains the robot's next movement speed and steering angle, and sends the motion control instructions to the control module, which acts on them. At the same time, the computing and processing unit module continuously corrects the robot's movement speed and steering angle according to the distance and angle between the current fed-back position and the target position, realizing a coupling strategy with the control module and ensuring the stable operation of the robot.
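A hedged sketch of the correction loop described above: the next speed and steering angle are derived from the distance and bearing to the planned path point. The proportional gains and the function signature are illustrative assumptions, not the patent's control law.

```python
# Illustrative correction step: derive speed and steering angle from the distance and
# bearing between the current fed-back pose and the target path point.
import numpy as np

def correct_motion(current_pose, target_point, k_v=0.5, k_w=1.5, v_max=1.0):
    """current_pose = (x, y, yaw); target_point = (x, y). Returns (speed, steering_angle)."""
    x, y, yaw = current_pose
    dx, dy = target_point[0] - x, target_point[1] - y
    distance = np.hypot(dx, dy)                         # distance to the target position
    bearing = np.arctan2(dy, dx)                        # direction to the target in the world frame
    angle_error = np.arctan2(np.sin(bearing - yaw),     # heading error wrapped to [-pi, pi]
                             np.cos(bearing - yaw))
    speed = min(k_v * distance, v_max)                  # slow down as the target is approached
    steering_angle = k_w * angle_error                  # steer toward the target
    return speed, steering_angle
```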
[0074] The working method of the system is the same as that of the multi-source information fusion robot positioning method for unstructured environments in Embodiment 1, and will not be repeated here.


