An Approach to Visual Odometry Based on End-to-End Semi-Supervised Generative Adversarial Networks

A visual odometry method based on semi-supervised learning, applied in the field of visual odometry, which addresses the problems of limited usage scenarios, limited accuracy, and the difficulty of obtaining data annotated with geometric information.

Active Publication Date: 2021-11-05
XIAMEN UNIV

AI Technical Summary

Problems solved by technology

At present, this supervised learning approach faces the following difficulties. First, it is extremely difficult to obtain data with large-scale geometric annotations: one hundred thousand images correspond to tens of millions or even hundreds of millions of points, so labeling at this scale is prohibitively labor-intensive.
Therefore, some researchers, such as Kendall et al. [2], use Visual SFM with transfer learning [3] to annotate image poses, which saves considerable time and labor, but the resulting accuracy is limited by the Visual SFM algorithm.
Second, if the network only learns the pose of a single image relative to the current reference coordinate system, the usage scenarios are greatly limited.




Embodiment Construction

[0032] The following embodiments are further described with reference to the accompanying drawings.

[0033] An embodiment of the present invention comprises the following steps:

[0034] 1) Constructing the feature generation network. The specific method is as follows: in the deep feature-point detection and descriptor extraction process, feature points and descriptors are treated separately, while the generation network produces feature points and deep descriptors simultaneously and can outperform the SIFT algorithm in speed. The generation network integrates two functions, feature-point detection and deep descriptor extraction: it takes an RGB image as input and produces a pixel-level feature-point probability map and feature descriptors through a deep encoder and decoder. The feature-point detector is designed with computational efficiency and real-time operation in mind, so that the network can run within complex SLAM computing systems, especially on resource-constrained computing ...
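The two-headed structure described above (a shared encoder feeding a pixel-level probability-map head and a dense descriptor head) can be sketched as follows. This is a minimal NumPy illustration of the structure only; the single-layer encoder, layer sizes, and random weights are stand-ins, since the patent text does not specify the actual trained architecture in detail.

```python
import numpy as np

def conv2d(x, w, relu=False):
    # naive "same" 3x3 convolution: x is (C_in, H, W), w is (C_out, C_in, 3, 3)
    c_out, c_in, kh, kw = w.shape
    _, h, wid = x.shape
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((c_out, h, wid))
    for i in range(kh):
        for j in range(kw):
            patch = xp[:, i:i + h, j:j + wid]                 # (C_in, H, W)
            out += np.einsum('oc,chw->ohw', w[:, :, i, j], patch)
    return np.maximum(out, 0.0) if relu else out

rng = np.random.default_rng(0)

def encoder(img):
    # shared deep feature encoder (one conv + ReLU stands in for the full stack)
    w = rng.standard_normal((8, 3, 3, 3)) * 0.1
    return conv2d(img, w, relu=True)

def detector_head(feat):
    # pixel-level feature-point probability map via per-pixel sigmoid
    w = rng.standard_normal((1, 8, 3, 3)) * 0.1
    logits = conv2d(feat, w)
    return 1.0 / (1.0 + np.exp(-logits[0]))                   # (H, W) in [0, 1]

def descriptor_head(feat, dim=16):
    # dense deep descriptors, L2-normalised per pixel
    w = rng.standard_normal((dim, 8, 3, 3)) * 0.1
    d = conv2d(feat, w)
    return d / (np.linalg.norm(d, axis=0, keepdims=True) + 1e-8)

img = rng.random((3, 32, 32))        # RGB input image
feat = encoder(img)                  # shared features
prob = detector_head(feat)           # (32, 32) feature-point probability map
desc = descriptor_head(feat)         # (16, 32, 32) descriptor volume
```

Both heads read the same encoder output, which is what lets the network produce feature points and descriptors in a single forward pass rather than in two separate stages as SIFT-style pipelines do.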



Abstract

A method of visual odometry based on an end-to-end semi-supervised generative adversarial network, involving a visual odometry algorithm. The method builds a feature generation network, builds a discriminative network, performs adversarial training, and addresses dynamic-scene problems. The SIFT algorithm is used to annotate feature-point positions and extract feature descriptors; a randomly generated homography matrix then produces related frames and matched feature points, yielding the corresponding training labels. The generation network takes the original image as input and outputs the corresponding feature-point positions and deep descriptors. The discriminative network combines a semantic-geometric consistency loss, a feature-point cross-entropy loss, and a discriminative loss to compete with the generation network. Through GAN training, the generation network learns to produce feature-point positions and deep descriptors that the discriminative network cannot distinguish, thus avoiding manually designed local features.
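The homography-based self-labeling step in the abstract can be illustrated as follows. This is a hedged sketch: the uniform perturbation sampling and its scale are illustrative assumptions, and the random keypoints stand in for actual SIFT detections. Warping the detected points with a known random homography yields matched point pairs in the related frame, which serve as training labels at no annotation cost.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_homography(scale=0.05):
    # identity plus a small random perturbation (hypothetical sampling scheme;
    # the patent only states that the homography is randomly generated)
    H = np.eye(3) + rng.uniform(-scale, scale, (3, 3))
    H[2, 2] = 1.0
    return H

def warp_points(pts, H):
    # pts: (N, 2) coordinates -> homogeneous transform -> dehomogenised (N, 2)
    ones = np.ones((pts.shape[0], 1))
    ph = np.hstack([pts, ones]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

# stand-ins for SIFT keypoints on the source frame, in normalised coordinates
kps = rng.uniform(0.0, 1.0, (50, 2))
H = random_homography()
kps_warped = warp_points(kps, H)

# each (kps[i], kps_warped[i]) pair is a ground-truth correspondence between
# the original frame and the synthetically related frame: a free training label
```

Because the homography that relates the two frames is known exactly, the correspondences are exact as well, which is what makes them usable as supervision for the generation network.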

Description

Technical field [0001] The present invention relates to visual odometry methods, and in particular to a visual odometry method based on an end-to-end semi-supervised generative adversarial network. Background technique [0002] Over the past few decades, mobile robotics and autonomous driving have attracted wide attention from researchers worldwide, and significant progress and breakthroughs have been made. At present, mobile robots can autonomously perform complex tasks; for example, robots developed by Boston Dynamics can imitate human backflips, open doors, and perform other activities. Autonomous driving technology has also achieved major breakthroughs, and self-driving cars are expected to enter mass production within two years. Both mobile robots and autonomous driving systems need to navigate automatically in complex and dynamic indoor or outdoor environments. To enable autonomous navigation, the navigating platform needs to locate itself in the sur...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T17/00; G06T7/246; G06T7/73; G06N3/04; G01C21/20; G01C22/00
CPC: G06T17/00; G06T7/246; G06T7/73; G01C21/20; G01C22/00; G06N3/045
Inventor: 纪荣嵘, 郭锋, 陈晗
Owner: XIAMEN UNIV