Visual SLAM front-end pose estimation method based on deep learning

A deep-learning-based vision technology applied in the field of visual navigation. It addresses problems in existing methods such as poor universality, difficult feature detection, and poor robustness, and achieves strong robustness.

Active Publication Date: 2020-05-08
NO 20 RES INST OF CHINA ELECTRONICS TECH GRP
Cites: 3 · Cited by: 18

AI Technical Summary

Problems solved by technology

Solves the technical problem of poor universality in existing learning-based visual pose estimation, and the technica...



Examples


Detailed Description of Embodiments

[0051] The present invention will be further described below in conjunction with the accompanying drawings and embodiments.

[0052] After receiving continuous image frames as end-to-end input, the method estimates the pose transformation between frames in real time. First, data preprocessing is performed on the original data set, including data augmentation and data conversion; a Brox network is then constructed to extract dense optical flow from the input consecutive frames. The extracted optical-flow map is fed into two networks for feature extraction: one branch uses global information to extract high-dimensional features, while the other branch divides the optical-flow map into four sub-images, each of which is down-sampled to obtain image features. Finally, the features obtained by training the two branches are fused, and pose estimation is performed in a final cascaded fully connected network to obtain the pose between two adjacent frames. The ne...
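The local branch described above (splitting the optical-flow map into four sub-images, down-sampling each, and fusing with the global branch) can be sketched as follows. This is a minimal NumPy illustration of the data layout only, not the patent's implementation: the function names, the stride-2 down-sampling, and the 128-dimensional global feature are assumptions standing in for the learned CNN layers.

```python
import numpy as np

def split_into_quadrants(flow):
    """Split an H x W x 2 optical-flow map into four equal sub-images."""
    h, w = flow.shape[0] // 2, flow.shape[1] // 2
    return [flow[:h, :w], flow[:h, w:], flow[h:, :w], flow[h:, w:]]

def downsample(sub, factor=2):
    """Naive strided down-sampling (stand-in for a learned pooling layer)."""
    return sub[::factor, ::factor]

def fuse(global_feat, local_feats):
    """Concatenate the global branch's feature vector with flattened local features."""
    locals_flat = np.concatenate([f.ravel() for f in local_feats])
    return np.concatenate([global_feat, locals_flat])

# Toy 8x8 flow map with 2 channels (u, v components of optical flow)
flow = np.zeros((8, 8, 2))
quads = split_into_quadrants(flow)        # four 4x4x2 sub-images
locals_ = [downsample(q) for q in quads]  # four 2x2x2 down-sampled sub-images
fused = fuse(np.zeros(128), locals_)      # 128 global + 4*2*2*2 = 160 values
```

The quadrant split preserves local motion structure (each sub-image covers one image region), while the concatenation gives the final regression network both the global and per-region views the abstract describes.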



Abstract

The invention provides a visual SLAM front-end pose estimation method based on deep learning, used to estimate inter-frame pose transformation in real time. The method comprises the following steps: first, data preprocessing is performed on the original data set, and a Brox network is constructed to carry out dense optical-flow extraction on input continuous frame images; feature extraction is then performed on the extracted optical-flow map by two networks, with one branch extracting high-dimensional features from global information, and the other branch dividing the optical-flow map into four sub-images that are respectively down-sampled to obtain image features; finally, the features obtained by training the two branches are fused, and pose estimation is carried out by the final cascaded fully connected network to obtain the pose between two adjacent frames. The method solves the problem of real scale estimation in monocular vision, can extract camera motion and scale information by using global and local information, and improves the learning ability and intelligence level of the robot.
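The abstract's final step is a cascaded fully connected network that regresses a 6-DoF pose (3 translation + 3 rotation components) from the fused feature vector. A toy stand-in is sketched below; the layer sizes (160 → 64 → 6), random weights, and ReLU activation are illustrative assumptions, not the patent's trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

def fc_relu(x, w, b):
    """One fully connected layer followed by ReLU."""
    return np.maximum(w @ x + b, 0.0)

# Assumed sizes: 160-dim fused features -> 64 hidden units -> 6 pose values
w1, b1 = rng.standard_normal((64, 160)) * 0.01, np.zeros(64)
w2, b2 = rng.standard_normal((6, 64)) * 0.01, np.zeros(6)

fused = rng.standard_normal(160)
hidden = fc_relu(fused, w1, b1)
pose = w2 @ hidden + b2  # final layer kept linear: pose components are unbounded
```

Keeping the output layer linear is the usual choice for pose regression, since translation and rotation parameters are not confined to a fixed range.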

Description

Technical Field [0001] The invention relates to the field of visual navigation, and in particular to a visual SLAM front-end pose estimation method. After receiving continuous image frames as end-to-end input, the pose transformation between frames is estimated in real time, providing a highly robust deep-learning-based visual SLAM method for UAVs. Background Technique [0002] Simultaneous Localization and Mapping (SLAM) is a technology in which agents such as UAVs use on-board sensors to build a map of the surrounding environment during movement and localize themselves within the established map. When UAVs operate in certain special environments, they are susceptible to environmental interference that weakens the GPS signal or renders it completely invalid. To compensate for the shortcomings of GPS-based UAV navigation systems, SLAM can be used in environments where GPS is unavailable. As an effective alterna...

Claims


Application Information

IPC(8): G06T7/73; G06N3/04
CPC: G06T7/73; G06T2207/20081; G06T2207/20084; G06T2207/30241; G06N3/045
Inventors: 高嘉瑜, 李斌, 李阳, 景鑫
Owner NO 20 RES INST OF CHINA ELECTRONICS TECH GRP