Visual odometry method based on an end-to-end semi-supervised generative adversarial network

A visual odometry method using semi-supervised technology, applied in the field of visual odometry calculation. It addresses the problems of limited usage scenarios, limited precision, and the difficulty of obtaining data annotated with geometric information, thereby avoiding hand-designed local features and improving matching accuracy.

Active Publication Date: 2019-10-15
XIAMEN UNIV
Cites: 4 · Cited by: 26
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

At present, this supervised learning approach faces the following difficulties. First, it is extremely difficult to obtain data with large amounts of geometric-information annotations: one hundred thousand images correspond to tens of millions or even hundreds of millions of labeled points, so manual labeling at this scale is prohibitively labor-intensive.
Therefore, some researchers, such as Kendall et al. [2], use VisualSFM with transfer learning [3] to label image poses, which saves substantial time and labor cost, but the resulting accuracy is then limited by the VisualSFM algorithm.
Second, if the network only learns the pose of a single image relative to the current reference coordinate system, the usage scenarios are greatly limited.

Method used


Examples


Embodiment Construction

[0032] The following embodiments will further illustrate the present invention in conjunction with the accompanying drawings.

[0033] Embodiments of the present invention include the following steps:

[0034] 1) Construct a feature generation network. The specific method is as follows: existing deep feature-point detection and descriptor-extraction methods treat the feature point and the descriptor separately, whereas here the generation network produces feature points and depth descriptors simultaneously, which allows the speed to be optimized. Modeled on the SIFT operator, the feature generation network is divided into two functions: feature-point detection and depth-feature-descriptor extraction. An RGB image is taken as input, and a pixel-level feature-point probability map and depth feature descriptors are generated through an encoder and decoder. The feature-point detector is designed with computational efficiency and real-time performance in mind, so the ne...
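The detector head described above outputs a pixel-level feature-point probability map. The patent does not give the post-processing, but a common way to turn such a map into discrete keypoints is thresholding followed by non-maximum suppression; the sketch below (threshold and radius values are illustrative assumptions, not taken from the patent) shows that step in plain NumPy.

```python
import numpy as np

def extract_keypoints(prob_map, threshold=0.5, nms_radius=4):
    """Pick keypoints from a pixel-level feature-point probability map.

    Candidates above `threshold` are kept greedily in descending order of
    score, suppressing any later candidate within `nms_radius` pixels --
    a simple non-maximum suppression, standing in for whatever
    post-processing the patent's detector head actually uses.
    """
    ys, xs = np.where(prob_map > threshold)
    scores = prob_map[ys, xs]
    order = np.argsort(-scores)                  # highest score first
    keep = []
    for i in order:
        y, x = ys[i], xs[i]
        if all((y - ky) ** 2 + (x - kx) ** 2 > nms_radius ** 2
               for ky, kx in keep):
            keep.append((y, x))
    return np.array(keep)                        # (N, 2) array of (row, col)

# Toy probability map with two well-separated peaks.
pm = np.zeros((16, 16))
pm[3, 3] = 0.9
pm[12, 10] = 0.8
kps = extract_keypoints(pm)
```

Because the map is produced densely by the decoder, this extraction is the only per-keypoint work at inference time, which is consistent with the real-time goal stated above.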



Abstract

The invention discloses a visual odometry method based on an end-to-end semi-supervised generative adversarial network, and relates to a visual odometry calculation method. The method comprises: constructing a feature generation network; constructing a discrimination network; performing adversarial training; and solving dynamic-scene problems. Specifically, point position information is marked and feature descriptors are extracted with the SIFT feature algorithm, and related frames and matching feature points are obtained by randomly generating a homography matrix, so as to generate the corresponding training labels. The generation network takes an original image as input and generates corresponding feature-point positions and depth descriptions; the discrimination network combines a semantic-geometric consistency loss function, a feature-point cross-entropy loss function, and a discriminant loss function to form an adversarial pairing with the generation network. Through training of the GAN, the generation network learns to produce point position information and depth descriptions that the discrimination network cannot distinguish, so that manual design of local features is avoided.
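The abstract's self-supervision trick is that a randomly generated homography turns one image into a "related frame" whose feature-point correspondences are known exactly, yielding free training labels. A minimal sketch of that label-generation step is below; how the patent actually samples its random homographies is not specified, so the identity-plus-noise parameterization here is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_homography(scale=0.1):
    """A random perturbation of the identity homography (illustrative;
    the patent does not specify its sampling scheme)."""
    H = np.eye(3) + scale * rng.uniform(-1.0, 1.0, (3, 3))
    H[2, 2] = 1.0
    return H

def warp_points(H, pts):
    """Apply homography H to an (N, 2) array of (x, y) points
    via homogeneous coordinates."""
    ones = np.ones((pts.shape[0], 1))
    ph = np.hstack([pts, ones]) @ H.T     # (N, 3) homogeneous points
    return ph[:, :2] / ph[:, 2:3]         # back to inhomogeneous (x, y)

# Feature points detected in the original image become exact ground-truth
# matches in the warped image -- the pair (src[i], dst[i]) is a training label.
src = np.array([[10.0, 20.0], [40.0, 15.0], [25.0, 30.0]])
H = random_homography()
dst = warp_points(H, src)
```

Warping the image itself with the same H (e.g. with `cv2.warpPerspective`) would then give the related frame on which the warped keypoints and descriptors should be re-detected.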

Description

technical field

[0001] The invention relates to a visual odometry calculation method, in particular to a visual odometry method based on an end-to-end semi-supervised generative adversarial network.

Background technique

[0002] In the past few decades, the fields of mobile robotics and autonomous driving have attracted extensive attention from researchers worldwide, and significant progress and breakthroughs have been achieved. Mobile robots can now perform complex tasks autonomously; for example, the robot dog developed by Boston Dynamics has been able to imitate humans in doing backflips and opening doors. Major breakthroughs have also been made in autonomous driving, and autonomous vehicles are expected to reach mass production within two years. Both mobile robots and self-driving technologies require autonomous navigation in complex and dynamic indoor or outdoor environments. In order to navigate autonomously,...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T17/00; G06T7/246; G06T7/73; G06N3/04; G01C21/20; G01C22/00
CPC: G06T17/00; G06T7/246; G06T7/73; G01C21/20; G01C22/00; G06N3/045
Inventor: 纪荣嵘, 郭锋, 陈晗
Owner XIAMEN UNIV