
Robot positioning method with fusion of visual features and IMU information

A robot positioning technology based on visual features, applied in the fields of instruments, computer components, and character and pattern recognition; it addresses the problem that existing systems cannot cope with complex motion environments, and achieves the effect of improved robustness.

Inactive Publication Date: 2019-10-18
ZHEJIANG UNIV OF TECH
Cites: 6 | Cited by: 64

AI Technical Summary

Problems solved by technology

However, the system as a whole uses the most basic reference-frame tracking model, which cannot cope with complex motion environments.



Examples


Specific Embodiments

[0061] With reference to Figure 2, the specific implementation of the method of the invention is as follows:

[0062] After connecting the MYNT EYE inertial camera to the EAIBOT mobile platform via USB, power on the Lenovo ThinkPad, enter the camera start-up command, and launch the algorithm. In this example, the camera samples at 30 Hz, keyframes are sampled at a lower rate, and the IMU samples at 100 Hz.
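As a minimal illustration of how measurements at these two rates are associated before fusion (not from the patent; the `ImuSample` container and function names are hypothetical), the ~100 Hz IMU stream can be bucketed between consecutive 30 Hz camera frames, so that each bucket holds exactly the inertial measurements to be integrated between two frames:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ImuSample:
    t: float                            # timestamp in seconds
    accel: Tuple[float, float, float]   # accelerometer reading, m/s^2
    gyro: Tuple[float, float, float]    # gyroscope reading, rad/s

def bucket_imu_between_frames(imu: List[ImuSample],
                              frame_times: List[float]) -> List[List[ImuSample]]:
    """Group the ~100 Hz IMU stream into one bucket per camera-frame interval:
    buckets[k] holds samples with frame_times[k] <= t < frame_times[k+1],
    i.e. the measurements to integrate between frame k and frame k+1.
    Samples before the first frame or after the last frame are dropped."""
    buckets: List[List[ImuSample]] = [[] for _ in range(len(frame_times) - 1)]
    k = 0
    for s in sorted(imu, key=lambda s: s.t):
        # advance to the frame interval that contains this sample
        while k < len(buckets) - 1 and s.t >= frame_times[k + 1]:
            k += 1
        if frame_times[k] <= s.t < frame_times[k + 1]:
            buckets[k].append(s)
    return buckets
```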

[0063] Step 1: The method first uses the ORB feature extraction algorithm to extract rich ORB feature points from the image frames captured by the MYNT EYE camera; then, from the pixel positions of matched feature points between two image frames, it recovers the motion of the monocular camera using multi-view geometry and thereby estimates the motion of the robot.
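A minimal sketch of this step using OpenCV (an assumption; the patent does not name an implementation library): ORB features are extracted in two consecutive grayscale frames and matched, and the relative camera motion is recovered from the essential matrix. The intrinsic matrix `K` is a placeholder for the calibrated camera parameters.

```python
import cv2
import numpy as np

def estimate_relative_motion(img1, img2, K):
    """Estimate the relative rotation R and unit-scale translation t between
    two grayscale frames via ORB matching and essential-matrix decomposition."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Brute-force Hamming matching suits binary ORB descriptors
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC rejects outlier correspondences while fitting E
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    # Decompose E into R, t; monocular translation is recovered only up to scale
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```

Note that the translation recovered this way has no absolute scale, which is precisely the monocular limitation the patent's IMU fusion is meant to resolve.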

[0064] Assume that a point in space has coordinates P_i = [X_i, Y_i, Z_i]^T and that its projected pixel coordinates are U_i = [u_i, v_i]^T; the...
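The paragraph is truncated here, but given the symbols it defines, it is evidently introducing the standard pinhole projection model; a conventional formulation (an assumption, since the patent text is cut off; K denotes the camera intrinsic matrix) is:

$$ Z_i \begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix} = K P_i = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_i \\ Y_i \\ Z_i \end{bmatrix} $$

For two frames, writing x_1 and x_2 for the normalized coordinates K^{-1}[u, v, 1]^T of a matched feature pair, the relative rotation R and translation t satisfy the epipolar constraint x_2^T E x_1 = 0 with E = [t]_× R, which is the standard relation used to recover camera motion in such two-frame multi-view geometry.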



Abstract

The invention relates to a robot positioning method that fuses visual features with IMU information, and puts forward a method for fusing monocular vision with an IMU. Visual front-end pose tracking is performed and the pose of the robot is estimated using feature points; an IMU bias model, the absolute scale, and the direction of gravitational acceleration are estimated using purely visual information; the IMU measurements are then integrated to obtain high-precision pose information, which provides an initial reference for the optimization search, and this initial reference, the state quantities, and the visual navigation information all participate in the optimization; and the back end employs a sliding-window-based tightly coupled nonlinear optimization method to optimize the pose and the map. Because the odometry is computed over a sliding window, the computational complexity is fixed, which enhances the robustness of the algorithm.
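To make the fixed-complexity claim concrete, here is a minimal sketch (hypothetical class and parameter names, not from the patent) of how a sliding-window back end bounds each optimization: only the newest N keyframe states are kept, with the oldest marginalized out, so the nonlinear solve never grows with trajectory length.

```python
from collections import deque

WINDOW_SIZE = 10  # keyframes retained; a typical choice, not specified by the patent

class SlidingWindowBackend:
    """Sketch of a sliding-window back end: the optimization problem size
    stays bounded no matter how long the robot runs."""

    def __init__(self):
        self.states = deque()  # per-keyframe state: pose, velocity, IMU biases

    def add_keyframe(self, state):
        self.states.append(state)
        if len(self.states) > WINDOW_SIZE:
            # A real tightly coupled VIO back end marginalizes the oldest
            # state (folding its information into a prior) rather than
            # simply discarding it, as done here for brevity.
            self.states.popleft()
        self.optimize()

    def optimize(self):
        # Tightly coupled nonlinear optimization over at most WINDOW_SIZE
        # states: the cost of each solve is fixed, independent of how far
        # the robot has travelled.
        pass
```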

Description

technical field

[0001] The invention relates to an indoor visual positioning method for a robot, and in particular to a robot positioning method that fuses visual features with IMU information.

Background technique

[0002] The Simultaneous Localization and Mapping (SLAM) problem for a mobile robot is to have the robot, in an unknown or uncertain environment, use its own sensors to perceive external information, localize itself, and build a map of the environment on that basis.

[0003] Visual localization is a hot topic in robot SLAM research. Visual localization methods use an acquired image sequence: image features are extracted and matched across different frames, and the motion parameters of the mobile robot are recovered from the motion of the feature points. Monocular SLAM suffers from initialization-scale ambiguity and tracking-scale drift. In complex environments, in order to remedy the defects of monocular v...


Application Information

IPC(8): G01C21/20; G01C21/16; G01C21/18; G06K9/62; G06T7/73
CPC: G01C21/206; G01C21/165; G01C21/18; G06T7/74; G06F18/25
Inventors: 禹鑫燚, 来磊, 欧林林, 金燕芳, 吴加鑫
Owner ZHEJIANG UNIV OF TECH