
GPS-fused robot vision inertial navigation integrated positioning method

A robot-vision and combined-positioning technology, applied in the field of GPS-integrated visual-inertial robot positioning, that addresses the inability of existing methods to achieve high-precision mapping and positioning, while avoiding excessive computing-power consumption and reducing estimation errors.

Active Publication Date: 2020-05-08
NANJING UNIV OF SCI & TECH
Cites: 9 | Cited by: 23

AI Technical Summary

Problems solved by technology

However, for the long-duration, large-scale inspection tasks required at airports, SLAM technology is limited by the computing power of the onboard industrial computer and cannot deliver high-precision mapping and positioning over large scenes and long periods of time.




Embodiment Construction

[0017] As shown in Figure 1, a GPS-fused robot visual-inertial navigation combined positioning method includes the following steps:

[0018] Step 1. Extract feature points from the current left-camera image based on the feature points of the previous left-camera image and match them; then extract feature points from the current right-camera image based on the matched left-camera feature points and match them. Use these matched feature points to calculate the three-dimensional coordinates of the feature points and the relative pose between image frames. As shown in Figure 2, the specific steps are:

[0019] Step 1-1. Use the goodFeaturesToTrack() function in OpenCV to extract Shi-Tomasi corner points from the first left-camera frame, and use the LK optical flow method to track the feature points of subsequent images o...
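A minimal sketch of this tracking-and-triangulation front end using OpenCV's Python API; the corner-detection parameters, image filename, and stereo projection matrices below are illustrative assumptions, not values from the patent:

    import cv2
    import numpy as np

    # Illustrative rectified-stereo projection matrices (assumed values, not
    # from the patent): P = K [R | t] with a 0.1 m baseline.
    K = np.array([[718.0, 0.0, 607.0],
                  [0.0, 718.0, 185.0],
                  [0.0, 0.0, 1.0]])
    P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_right = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

    def track_and_triangulate(prev_left, cur_left, cur_right, prev_pts):
        """Track left-camera corners over time, match them into the right
        image, and triangulate 3D coordinates of the surviving points."""
        # Temporal matching: LK optical flow, previous -> current left image.
        cur_pts, st1, _ = cv2.calcOpticalFlowPyrLK(prev_left, cur_left,
                                                   prev_pts, None)
        # Stereo matching: LK optical flow, current left -> current right image.
        right_pts, st2, _ = cv2.calcOpticalFlowPyrLK(cur_left, cur_right,
                                                     cur_pts, None)
        good = (st1.ravel() == 1) & (st2.ravel() == 1)
        pts_l = cur_pts[good].reshape(-1, 2)
        pts_r = right_pts[good].reshape(-1, 2)
        # Triangulate homogeneous points and dehomogenize to 3D coordinates.
        pts4d = cv2.triangulatePoints(P_left, P_right, pts_l.T, pts_r.T)
        pts3d = (pts4d[:3] / pts4d[3]).T
        return pts_l, pts3d

    # First frame: Shi-Tomasi corners on the left image.
    first_left = cv2.imread("left_000.png", cv2.IMREAD_GRAYSCALE)
    pts0 = cv2.goodFeaturesToTrack(first_left, maxCorners=200,
                                   qualityLevel=0.01, minDistance=20)

The relative pose between frames would then follow from these 2D-3D correspondences, for example via cv2.solvePnPRansac; that step is omitted here for brevity.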



Abstract

The invention discloses a GPS-fused robot vision-inertial navigation integrated positioning method. The method comprises the following steps: extracting and matching feature points between the left and right images and between consecutive frames of a binocular camera, and calculating the three-dimensional coordinates of the feature points and the relative poses of image frames; selecting key frames from the image stream, creating a sliding window, and adding the key frames to the sliding window; calculating a visual reprojection error, an IMU pre-integration residual, and a zero-bias residual, and combining them into a joint pose-estimation residual; performing nonlinear optimization on the joint pose-estimation residual with the Levenberg-Marquardt (L-M) method to obtain an optimized visual-inertial odometry (VIO) robot pose; if GPS data exist at the current moment, applying adaptive robust Kalman filtering to the GPS position data and the VIO pose estimate to obtain the final robot pose; and if no GPS data exist, using the VIO pose as the final pose. The method improves the positioning precision of the robot, reduces computational consumption, and satisfies the demands of large-range, long-duration inspection.
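As a rough illustration of the fusion step described above, the sketch below applies a position-only Kalman update in which the measurement covariance is inflated when the GPS innovation fails a chi-square-style gate; the patent's actual adaptive robust weighting is not reproduced here, so this inflation rule and all numeric values are assumptions:

    import numpy as np

    def adaptive_robust_kf_update(x, P, z_gps, R_gps, threshold=7.81):
        """Fuse a GPS position fix (z_gps, covariance R_gps) into a
        VIO-predicted position x with covariance P. An outlying fix
        inflates R_gps, shrinking the Kalman gain instead of being
        trusted outright."""
        H = np.eye(3)                    # position is observed directly
        y = z_gps - H @ x                # innovation
        S = H @ P @ H.T + R_gps          # innovation covariance
        md2 = float(y @ np.linalg.solve(S, y))  # squared Mahalanobis distance
        if md2 > threshold:              # assumed robust adaptation rule
            R_gps = R_gps * (md2 / threshold)
            S = H @ P @ H.T + R_gps
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        return x + K @ y, (np.eye(3) - K @ H) @ P

    # Per the abstract: fuse when a GPS fix exists, otherwise keep the VIO pose.
    x_vio, P_vio = np.array([1.0, 2.0, 0.5]), np.diag([0.04, 0.04, 0.09])
    gps_fix = np.array([1.1, 2.05, 0.45])   # illustrative fix
    if gps_fix is not None:
        x_final, P_final = adaptive_robust_kf_update(
            x_vio, P_vio, gps_fix, R_gps=np.diag([0.25, 0.25, 1.0]))
    else:
        x_final, P_final = x_vio, P_vio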

Description

Technical field

[0001] The invention belongs to the technical field of automatic inspection, and specifically relates to a GPS-fused robot visual-inertial navigation combined positioning method.

Background technique

[0002] For autonomous inspection robots, pose estimation is essential: it is the basis on which the robot completes its inspection tasks. A traditional approach is differential-GPS pose estimation, in which two GPS antennas receive satellite signals and the position, attitude, and speed of the inspection robot are computed, with accuracy reaching the centimeter level. However, existing high-precision differential-GPS solutions are relatively expensive and must keep clear of tall obstacles, so they are suitable only for positioning in open areas. At this stage, simultaneous localization and mapping (SLAM) technology is more popular; a local map is constructed by fully sensing the surrounding environment t...


Application Information

IPC(8): G01C21/16; G01C21/00; G01S19/49; G01S19/48; G06T7/73
CPC: G01C21/165; G01C21/005; G01S19/49; G01S19/48; G06T7/73; G06T2207/10016; G06T2207/20024; Y02T10/40
Inventors: 郭健, 黄迪, 李胜, 吴益飞, 钱抒婷, 吕思聪, 朱佳森, 朱文宇
Owner: NANJING UNIV OF SCI & TECH