Autonomous positioning navigation method for mobile detection robot

An autonomous positioning and navigation technology, applicable to navigation computing tools and the like, that addresses the susceptibility to noise which degrades the accuracy of robot positioning

Inactive Publication Date: 2019-02-15
HARBIN UNIV OF SCI & TECH
Cites: 6 | Cited by: 37


Problems solved by technology

[0004] The purpose of the present invention is to solve the problem that, in the early stage of existing robot positioning, the traditional wavelet transform method used to extract features of the target image is susceptible to noise, which affects the positioning accuracy of the robot.

Method used


Examples


Example Embodiment

Specific embodiment one:

[0025] The autonomous positioning and navigation method of a mobile detection robot in this embodiment is implemented, as shown in Figure 1, through the following steps:

[0026] Step 1: A sensor reads the image information, preprocesses it, and transfers the preprocessed information to the V-SLAM system. Visual SLAM (V-SLAM) refers to real-time localization and map construction that uses images as the main source of environmental perception; it can be applied in fields such as autonomous driving and augmented reality and has been a hot research direction in recent years. A V-SLAM system is a SLAM system based on visual sensors, which performs localization and map construction from a series of continuously changing images.
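The preprocessing in step 1 is not specified in detail in the source; the following is a minimal sketch, assuming grayscale conversion and intensity normalization as typical first steps before a frame is handed to the V-SLAM system:

```python
import numpy as np

def preprocess_frame(frame: np.ndarray) -> np.ndarray:
    """Convert an RGB frame to a normalized grayscale image.

    Hypothetical preprocessing for the V-SLAM front end; the patent does
    not specify the exact steps, so grayscale conversion and intensity
    normalization are assumed here as a typical minimum.
    """
    # Luminance weights for RGB -> grayscale (ITU-R BT.601).
    gray = frame[..., 0] * 0.299 + frame[..., 1] * 0.587 + frame[..., 2] * 0.114
    # Normalize intensities to [0, 1] so later stages are less exposure-dependent.
    lo, hi = gray.min(), gray.max()
    return (gray - lo) / (hi - lo + 1e-12)
```

In practice a real front end would also undistort the image using the camera's calibration, but that requires intrinsics the source does not give.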

[0027] Step 2: The estimation process of the visual odometer, also called the perceptual front end:

[0028] Estimate the pose information of the camera motion...
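Step 2 can be sketched as chaining the predicted adjacent-frame transformation matrices into absolute camera poses. The sketch below assumes the improved RCNN network already supplies each 4x4 homogeneous transform; only the standard SE(3) composition step is shown, not the network itself:

```python
import numpy as np

def accumulate_poses(relative_transforms):
    """Chain 4x4 homogeneous transforms between adjacent frames into
    absolute camera poses.

    `relative_transforms[k]` is assumed to map frame k+1 into frame k
    (e.g. as predicted by the patent's improved RCNN network); the
    output is the camera trajectory in the world frame of frame 0.
    """
    pose = np.eye(4)
    trajectory = [pose.copy()]
    for T in relative_transforms:
        pose = pose @ T  # compose: world <- frame_k <- frame_{k+1}
        trajectory.append(pose.copy())
    return trajectory
```

This composition step is common to all visual odometers; what distinguishes methods is how each relative transform is obtained.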

Example Embodiment

Specific embodiment two:

[0037] This embodiment differs from the first embodiment in that the estimation process of the visual odometer in step 2 is as follows. The core problem to be solved is the camera motion between adjacent images, as shown in Figures 2a and 2b. Evidently, Figure 2b is obtained by rotating the viewpoint of Figure 2a to the left; a human observer perceives this directly. For the camera, the scene near the left edge of Figure 2a occupies more of the frame in Figure 2b, that is, the cabinets in the distance appear larger in the picture, while part of the cabinet at the corresponding position on the right disappears from the picture. From this information, the camera's movement can be judged perceptually: after capturing Figure 2a, the camera rotated to the left and then captured Figure 2b.

[0038] However, this is only a perceptual...

Example Embodiment

Specific embodiment three:

[0039] This embodiment differs from the first or second embodiment in that the back-end optimization process of step 3 is as follows. In a general sense, the main task of the back end is to optimize over the noisy data in SLAM. Physically, any measurement carries error, so even data from accurate sensors contains errors, and low-cost sensors have still larger errors. The main problem solved by back-end optimization is estimating the overall state of the system from noisy sensor data, including the robot's own trajectory, the map of the surrounding environment, and the uncertainty of the resulting state estimate; this is also called maximum a posteriori estimation.
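The back-end task described above, estimating the full state from noisy measurements via maximum a posteriori estimation, can be illustrated with a deliberately tiny example. The sketch below shrinks the problem to scalar (1-D) poses with odometry and loop-closure constraints; this is an assumption for illustration only, since real SLAM back ends optimize over SE(3) poses and landmarks:

```python
import numpy as np

def optimize_1d_pose_graph(odometry, loop_closures):
    """Toy maximum a posteriori estimate for 1-D robot poses.

    `odometry[i]` is a noisy measurement of x[i+1] - x[i]; each loop
    closure (i, j, z) is a noisy measurement of x[j] - x[i]. With
    Gaussian noise, the MAP estimate is the least-squares solution.
    The first pose is pinned to 0 to remove the gauge freedom.
    """
    n = len(odometry) + 1
    rows, rhs = [], []
    for i, z in enumerate(odometry):
        r = np.zeros(n); r[i + 1], r[i] = 1.0, -1.0
        rows.append(r); rhs.append(z)
    for i, j, z in loop_closures:
        r = np.zeros(n); r[j], r[i] = 1.0, -1.0
        rows.append(r); rhs.append(z)
    # Anchor x[0] = 0 with a heavily weighted prior row.
    r = np.zeros(n); r[0] = 1.0
    rows.append(1e6 * r); rhs.append(0.0)
    A, b = np.vstack(rows), np.array(rhs)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

Note how the loop-closure row lets the optimizer redistribute the accumulated odometry error over the whole trajectory, which is exactly the role the text assigns to the back end.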

[0040] The visual odometer is also called the visual SLAM front end. Its main task is to p...



Abstract

An autonomous positioning and navigation method for a mobile detection robot belongs to the field of robot vision navigation. In the early stage of existing robot positioning, a traditional wavelet transform method is used to extract features of the target image; this method is susceptible to noise, which degrades the positioning accuracy of the robot. The method of the invention comprises the following steps: reading image information with a sensor, preprocessing it, and transmitting the preprocessed information to a V-SLAM system; predicting the transformation matrix of two adjacent frame images with an improved RCNN network, estimating the pose of the camera motion, and building a local model of the environment; passing the result to a back end and optimizing it to obtain an accurate camera trajectory and map; determining, from the received sensor information, the visual odometer, and local back-end information, whether the robot has reached a given position before; passing the correction information to the back end for optimization and calculation when a correct loop is detected; and building a map so that the constructed map matches the task requirements.
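One step of the abstract, deciding whether the robot has reached a position it visited before (loop detection), can be sketched as a nearest-neighbor search over stored place descriptors. The descriptor representation and threshold below are assumptions for illustration; the patent's actual features come from its improved RCNN network:

```python
import numpy as np

def detect_loop(descriptor, keyframe_descriptors, threshold=0.1):
    """Check whether the current place matches a stored keyframe.

    Each previously visited place is summarized by a feature vector
    (a stand-in for the network's learned features). A revisit is
    declared when the nearest stored keyframe lies within `threshold`;
    otherwise None is returned and no loop closure is triggered.
    """
    if not keyframe_descriptors:
        return None
    dists = [np.linalg.norm(descriptor - d) for d in keyframe_descriptors]
    best = int(np.argmin(dists))
    return best if dists[best] < threshold else None
```

When a match is found, the abstract's next step applies: the correction information is passed to the back end for optimization.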

Description

Technical field

[0001] The invention relates to an autonomous positioning and navigation method for a mobile detection robot.

Background technique

[0002] Pose estimation of a robot refers to the process of obtaining its current position and attitude in real time from the sensors it carries. Mainstream pose estimation algorithms include the Global Navigation Satellite System (GNSS), inertial navigation systems, LiDAR navigation, and visual navigation; each of these can independently complete pose calculation, extraction, and estimation. Among them, the traditional combined GNSS/inertial navigation method is the most widely used on outdoor inspection robots and small UAVs; the technology is relatively mature, and its pose estimation accuracy is high. However, the GPS satellite signal is easily affected by the environment, and com...

Claims


Application Information

IPC(8): G01C21/20
CPC: G01C21/20
Inventors: 何召兰, 何乃超, 张庆洋, 姚徐, 丁淑培
Owner: HARBIN UNIV OF SCI & TECH