Robot multi-camera visual-inertial real-time positioning method and device

A multi-camera visual-inertial real-time positioning technology, applied in the field of robot navigation, which solves the problems of blurred imaging, the failure of purely visual positioning methods, and inconspicuous visual features, and achieves the effect of improved positioning accuracy.

Active Publication Date: 2019-03-22
ZHEJIANG UNIV

AI Technical Summary

Problems solved by technology

When the camera's field of view is blocked by obstacles, when visual features are not obvious, or when the feature texture is highly repetitive and difficult to match, loss of positioning tracking often occurs.




Embodiment Construction

[0042] The technical solution of the present invention is further described below in conjunction with the accompanying drawings and specific embodiments:

[0043] Figure 1 is a schematic flow chart of the robot multi-camera visual-inertial real-time positioning method of the present invention. The present invention discloses a robot multi-camera visual-inertial real-time positioning method, comprising the following steps:

[0044] Obtain the current multi-camera images and inertial sensor data of the robot;
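
For concreteness, here is a minimal sketch of the data acquired in this step, assuming one synchronized packet per image frame; the containers and field names are illustrative assumptions, not taken from the patent.

    # Illustrative containers for one synchronized sensor packet
    # (names are assumptions, not from the patent).
    from dataclasses import dataclass
    from typing import List
    import numpy as np

    @dataclass
    class ImuSample:
        timestamp: float         # seconds
        gyro: np.ndarray         # angular velocity (rad/s), shape (3,)
        accel: np.ndarray        # linear acceleration (m/s^2), shape (3,)

    @dataclass
    class MultiCameraFrame:
        timestamp: float
        images: List[np.ndarray]     # one image per differently oriented camera
        imu_window: List[ImuSample]  # IMU samples since the previous frame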

[0045] Extract image feature points from the current images and estimate the current robot pose; reconstruct a 3D point cloud from the current robot pose, and store historical and current point cloud data to maintain the visual point cloud map;
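
A minimal sketch of how this step could be realized, using ORB features and PnP with RANSAC as stand-ins for whatever detector and solver the implementation actually uses; the camera intrinsics K and the map arrays are assumed inputs.

    # Match features in the current image against the visual point cloud
    # map, then recover the camera pose with PnP + RANSAC (a sketch).
    import cv2
    import numpy as np

    orb = cv2.ORB_create(nfeatures=1000)

    def estimate_pose(image, map_points_3d, map_descriptors, K):
        keypoints, descriptors = orb.detectAndCompute(image, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(map_descriptors, descriptors)
        obj_pts = np.float32([map_points_3d[m.queryIdx] for m in matches])
        img_pts = np.float32([keypoints[m.trainIdx].pt for m in matches])
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
        return rvec, tvec  # rotation (Rodrigues vector) and translation

Newly triangulated 3D points would then be appended to the map alongside the historical point cloud data.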

[0046] Complete initialization and estimate the sensor bias values from the inertial sensor data, and pre-integrate to obtain the current speed and angle of the robot;
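
A simplified sketch of the pre-integration between two image frames, assuming the bias estimates bg and ba are already available and using plain Euler integration; production visual-inertial systems typically use on-manifold pre-integration. The ImuSample container is the one sketched above.

    # Accumulate relative rotation, velocity and position deltas from the
    # bias-corrected IMU samples between two frames (a simplified sketch;
    # gravity is added when composing these deltas with the world state).
    import numpy as np
    from scipy.spatial.transform import Rotation

    def preintegrate(imu_window, bg, ba):
        dR = Rotation.identity()   # relative rotation
        dv = np.zeros(3)           # relative velocity
        dp = np.zeros(3)           # relative position
        t_prev = imu_window[0].timestamp
        for s in imu_window[1:]:
            dt = s.timestamp - t_prev
            t_prev = s.timestamp
            omega = s.gyro - bg    # bias-corrected angular rate
            a = s.accel - ba       # bias-corrected acceleration
            dp += dv * dt + 0.5 * dR.apply(a) * dt**2
            dv += dR.apply(a) * dt
            dR = dR * Rotation.from_rotvec(omega * dt)
        return dR, dv, dp          # predicts the robot's angle and speed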

[0047] Optimize the current pose according to the visual point cloud map and the inertial sensor pre-integration.
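
A simplified sketch of this joint refinement, treating the IMU pre-integration prediction as a soft prior on the pose while minimizing reprojection error against the point cloud map; the pose parameterization and imu_weight are illustrative assumptions, not the patent's formulation.

    # Refine the pose by jointly minimizing (a) reprojection error of map
    # points and (b) deviation from the IMU-predicted pose (a sketch).
    import cv2
    import numpy as np
    from scipy.optimize import least_squares

    def refine_pose(rvec0, tvec0, obj_pts, img_pts, K,
                    rvec_imu, tvec_imu, imu_weight=10.0):
        def residuals(x):
            rvec, tvec = x[:3], x[3:]
            proj, _ = cv2.projectPoints(obj_pts, rvec, tvec, K, None)
            r_vis = (proj.reshape(-1, 2) - img_pts).ravel()
            r_imu = imu_weight * np.concatenate([rvec - rvec_imu,
                                                 tvec - tvec_imu])
            return np.concatenate([r_vis, r_imu])

        x0 = np.concatenate([np.ravel(rvec0), np.ravel(tvec0)])
        sol = least_squares(residuals, x0, method="lm")
        return sol.x[:3], sol.x[3:]  # optimized rotation and translation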



Abstract

The invention discloses a robot multi-camera visual-inertial real-time positioning method and device. The method comprises the following steps: current multi-view images and inertial sensor data of a robot are obtained; image feature points are extracted from the current images and a current robot pose is estimated; a 3D point cloud is rebuilt from the current robot pose, and historical and current point cloud data are stored to maintain a visual point cloud map; initialization is completed and a sensor bias value is estimated from the inertial sensor data, and pre-integration is conducted to obtain the current speed and angle of the robot; the current pose is optimized according to the visual point cloud map and the inertial sensor pre-integration; and the like. The multi-view cameras in the method provide a wider view by using information from multiple view angles; because the cameras face in different directions, the situation in which all views are obstructed is rare, and the visual features provided by the multiple cameras are richer, so the features needed to achieve positioning are almost always available.

Description

Technical Field

[0001] The invention relates to robot navigation technology, and in particular to a robot multi-camera visual-inertial real-time positioning method and device.

Background Technique

[0002] At present, more and more robots of different types appear in all aspects of production and life. Fields such as warehousing, logistics, inspection and monitoring require robots to run stably over long periods in a relatively fixed environment and to achieve precise self-positioning. When the camera's field of view is blocked by obstacles, when visual features are not obvious, or when the feature texture is highly repetitive and difficult to match, loss of positioning tracking often occurs. In addition, when the robot moves so fast that the image is blurred, existing purely visual positioning methods also fail. Multi-view cameras use information from multiple perspectives (overlapping or non-overlapping) to provide a wider field of view...


Application Information

IPC(8): G01C21/00; G01C21/16; G01C11/04
CPC: G01C11/04; G01C21/005; G01C21/165
Inventors: XIONG Rong (熊蓉), FU Bo (傅博), WANG Yue (王越)
Owner: ZHEJIANG UNIV