Indoor robot motion estimation method based on deep learning and visual inertia fusion

A robot motion and deep learning technology, applied in the field of motion estimation, that addresses the low accuracy of existing systems and achieves improved robustness, a reduced influence of inertial drift on positioning accuracy, and high positioning accuracy.

Pending Publication Date: 2020-01-21
GUILIN UNIV OF ELECTRONIC TECH

AI Technical Summary

Problems solved by technology

[0004] The present invention aims to solve the problem of low precision in existing intelligent robot motion estimation systems by providing an indoor robot motion estimation method based on deep learning and visual-inertial fusion.



Embodiment Construction

[0025] To make the objectives, technical solutions, and advantages of the present invention clearer, the invention is described in further detail below in conjunction with specific examples.

[0026] An indoor robot motion estimation method based on deep learning and visual-inertial fusion, as shown in Figure 1, proceeds as follows:

[0027] Step 1. Acquire the robot's visual data (i.e., visual images) with a monocular camera.
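Step 1 amounts to grabbing a stream of images from a single camera. Below is a minimal sketch, assuming OpenCV and a camera at device index 0; the excerpt does not name a capture API, so cv2.VideoCapture and the grayscale conversion are illustrative choices rather than the patent's.

```python
# Minimal sketch of Step 1: capture a short monocular image sequence.
# The device index and grayscale conversion are assumptions for illustration.
import cv2

cap = cv2.VideoCapture(0)          # hypothetical device index of the monocular camera
frames = []
while len(frames) < 100:           # collect a short sequence of visual images
    ok, frame = cap.read()
    if not ok:
        break
    # Many feature networks consume single-channel input, so convert to grayscale.
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
cap.release()
```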

[0028] Step 2. Use the 3D projective geometry training model to extract and track features in the visual images, and select keyframes at the same time.

[0029] (1) A CNN and an LSTM are combined into a deep learning network, the 3D projective geometry training model: a network trained with 3D projective geometry to generate keypoints and descriptors.

[0030] The 3D projective geometry training model is shown in Figure 2: the upper part is a CNN (Convolutional Neural Network), which is a 50-la...
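To make paragraphs [0029]-[0030] concrete, here is a minimal PyTorch sketch of a CNN + LSTM network that emits a per-pixel keypoint score map and dense descriptors. The excerpt gives neither layer counts nor feature dimensions nor the loss, so the small three-layer encoder (standing in for the deep backbone), the hidden size of 128, and the 256-D descriptors below are all assumptions.

```python
# A hedged sketch of a CNN + LSTM keypoint/descriptor network; all
# dimensions are illustrative assumptions, not the patent's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeypointDescriptorNet(nn.Module):
    def __init__(self, hidden=128, desc_dim=256):
        super().__init__()
        # CNN encoder: the "upper part" of the model in Figure 2.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, hidden, 3, stride=2, padding=1), nn.ReLU(),
        )
        # LSTM over time at each spatial location, so keypoint detection can
        # use temporal context (one plausible reading of the CNN+LSTM design).
        self.lstm = nn.LSTM(input_size=hidden, hidden_size=hidden, batch_first=True)
        self.keypoint_head = nn.Conv2d(hidden, 1, 1)            # per-pixel keypoint score
        self.descriptor_head = nn.Conv2d(hidden, desc_dim, 1)   # per-pixel descriptor

    def forward(self, frames):
        # frames: (batch, time, 1, H, W) grayscale image sequence
        b, t, c, h, w = frames.shape
        feats = self.encoder(frames.reshape(b * t, c, h, w))    # (b*t, hidden, h', w')
        _, ch, fh, fw = feats.shape
        # Treat each spatial cell as an independent sequence for the LSTM.
        seq = feats.reshape(b, t, ch, fh * fw).permute(0, 3, 1, 2)  # (b, h'*w', t, ch)
        out, _ = self.lstm(seq.reshape(b * fh * fw, t, ch))
        out = out.reshape(b, fh * fw, t, ch).permute(0, 2, 3, 1).reshape(b * t, ch, fh, fw)
        scores = torch.sigmoid(self.keypoint_head(out))         # keypoint heatmap
        descs = F.normalize(self.descriptor_head(out), dim=1)   # unit-norm descriptors
        return scores.reshape(b, t, 1, fh, fw), descs.reshape(b, t, -1, fh, fw)
```

At inference time, local maxima of the score map above a threshold would give the keypoints, and the descriptor map would be sampled at those locations for the matching, tracking, and keyframe selection of Step 2.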



Abstract

The invention discloses an indoor robot motion estimation method based on deep learning and visual-inertial fusion. A deep learning method performs feature extraction on visual data through a designed GCN network, visual and inertial navigation information is fused, and a robust SLAM system is constructed. The method greatly improves the robustness of the system and can be rapidly deployed on embedded devices with limited computing power. Pre-integration of the inertial information forms inter-frame constraints on the visual information, and a joint optimizer performs fusion optimization on the visual-inertial output data. Compared with a purely visual odometry system, the pre-integration-based visual-inertial odometry system has higher positioning precision: inertial information is used effectively, noise propagation in the system is suppressed, and the influence of inertial null drift on odometry positioning precision is reduced.
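The pre-integration step named in the abstract can be sketched briefly. Assuming the standard on-manifold formulation (e.g., Forster et al.), and deliberately omitting the bias and noise terms a full implementation would track, the relative rotation, velocity, and position deltas accumulated between two keyframes become the inter-frame constraint handed to the joint optimizer.

```python
# A compact, simplified sketch of IMU pre-integration between two keyframes.
# Bias estimation, noise propagation, and gravity handling are omitted;
# they belong in the residuals of a full visual-inertial optimizer.
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix of a 3-vector."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def exp_so3(w):
    """Rodrigues' formula: rotation matrix from a rotation vector."""
    theta = np.linalg.norm(w)
    if theta < 1e-9:
        return np.eye(3) + skew(w)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def preintegrate(gyro, accel, dt):
    """Accumulate relative rotation dR, velocity dv, and position dp from raw
    IMU samples between two keyframes; these deltas form the inter-frame
    constraint fed to the joint optimizer."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, accel):
        dp += dv * dt + 0.5 * (dR @ a) * dt**2
        dv += (dR @ a) * dt
        dR = dR @ exp_so3(w * dt)
    return dR, dv, dp
```

Because these deltas depend only on the raw IMU samples (and, in the full formulation, on the current bias estimates via correction Jacobians), they need not be re-integrated each time the optimizer updates the keyframe states, which is what makes joint visual-inertial optimization tractable.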

Description

Technical field

[0001] The invention relates to the technical field of motion estimation, and in particular to an indoor robot motion estimation method based on deep learning and visual-inertial fusion.

Background technique

[0002] Intelligent robots generally sense the surrounding environment through on-board sensing devices, use the information those devices provide to localize themselves and build maps, and then plan paths so that they finally reach their destination safely and reliably. Real-time, accurate motion estimation is the basis of mobile-robot intelligence and a prerequisite for autonomous behavior. With the continuous development of vision technology, visual odometry has been widely applied, for example in unmanned aerial vehicles, autonomous driving, and factory AGVs, and it is increasingly used for the autonomous positioning and motion estimation of intelligent robots. [0...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/20; G06T7/70; G06T7/80
CPC: G06T7/20; G06T7/70; G06T7/80; G06T2207/20081; G06T2207/20084
Inventors: 李海标 (Li Haibiao), 时君 (Shi Jun), 韦德流 (Wei Deliu)
Owner: GUILIN UNIV OF ELECTRONIC TECH