End-to-end visual positioning method and system

A visual positioning and pose-estimation technology in the field of end-to-end visual positioning methods and systems, addressing the difficulty of collecting positioning data, the limited positioning accuracy of prior approaches and their only marginal accuracy gains, and the sparsity of training data.

Pending Publication Date: 2021-02-02
INST OF AUTOMATION CHINESE ACAD OF SCI

AI Technical Summary

Problems solved by technology

Although the end-to-end method can overcome some shortcomings of the geometric method, its localization accuracy is limited by the sparsity of the training data. Because positioning data are relatively difficult to acquire, the poses in an end-to-end model's training database usually cover only a small part of the positioning space, which makes the network prone to overfitting during training. Most early work focused on designing new network structures or loss functions to improve the generalization ability of the network, but the resulting accuracy gains were modest.

Method used




Embodiment Construction

[0081] Preferred embodiments of the present invention are described below with reference to the accompanying drawings. Those skilled in the art should understand that these embodiments are only used to explain the technical principles of the present invention, and are not intended to limit the protection scope of the present invention.

[0082] The purpose of the present invention is to provide an end-to-end visual positioning method that, based on a deep convolutional neural network (Depth CNN), predicts the corresponding depth map from a source image and determines a composite image through a back-projection method; a pose regression network model is then determined, realizing end-to-end visual positioning, so that the absolute pose of the image to be tested can be accurately determined and positioning accuracy improved.
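The back-projection step in paragraph [0082] lifts each pixel of the source image into 3-D using the predicted depth and the camera intrinsics, then reprojects it into another view to form the composite image. The sketch below is an illustrative numpy implementation under a standard pinhole-camera model; the function name `backproject_warp` and its interface are assumptions, not taken from the patent.

```python
import numpy as np

def backproject_warp(depth, K, R, t):
    """Warp pixel coordinates of a source view into a target view.

    depth : (H, W) predicted depth map for the source image
    K     : (3, 3) camera intrinsics
    R, t  : rotation (3, 3) and translation (3,) from source to target
    Returns an (H, W, 2) array of sampling coordinates in the target image.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # homogeneous pixel coordinates, shape 3 x (H*W)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    # lift pixels to 3-D camera coordinates using the predicted depth
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    # rigid transform into the target camera frame, then project
    proj = K @ (R @ pts + t.reshape(3, 1))
    uv = (proj[:2] / proj[2:]).T.reshape(H, W, 2)
    return uv
```

Sampling the source image at the returned coordinates (e.g. with bilinear interpolation) yields the synthesized view; with an identity relative pose the warp maps every pixel back onto itself.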

[0083] In order to make the above objects, features and advantages of the present invention more comprehensible, the present invention will be further described in...



Abstract

The invention relates to an end-to-end visual positioning method and system. The method comprises the steps of: obtaining a training data set comprising a plurality of frames of continuous source images; and establishing a pose regression network model from the continuous source images, specifically: for each source image, predicting a corresponding depth map based on a deep convolutional neural network; determining a composite image through a back-projection method according to the camera intrinsics and the depth map; determining a pose regression network model from each source image and its corresponding composite image; and, based on the pose regression network model, obtaining the absolute pose of the image to be measured from that image. Because the method predicts a corresponding depth map from a source image with a deep convolutional neural network (Depth CNN) and determines a synthetic image through back projection, the pose regression network model can be determined, end-to-end visual positioning is realized, the absolute pose of the image to be measured can be accurately determined, and positioning precision is improved.
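The abstract does not state the loss used to train the pose regression network. A common choice in absolute-pose-regression work (PoseNet-style models) combines a translation term with a weighted quaternion rotation term; the sketch below is only that common formulation, and the function name `pose_loss` and the weight `beta` are illustrative assumptions, not details from the patent.

```python
import numpy as np

def pose_loss(t_pred, q_pred, t_gt, q_gt, beta=100.0):
    """PoseNet-style absolute-pose regression loss (illustrative).

    t_pred, t_gt : (3,) camera translations
    q_pred, q_gt : (4,) quaternions (q_gt assumed unit-norm)
    beta weighs rotation error against translation error, a
    common trick when the two terms have very different scales.
    """
    q_pred = q_pred / np.linalg.norm(q_pred)  # normalise predicted rotation
    t_err = np.linalg.norm(t_pred - t_gt)
    q_err = np.linalg.norm(q_pred - q_gt)
    return t_err + beta * q_err
```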

Description

technical field

[0001] The present invention relates to the field of computer vision and SLAM (Simultaneous Localization and Mapping), and in particular to an end-to-end visual positioning method and system based on an online geometric data augmentation strategy.

Background technique

[0002] Visual localization, which refers to estimating the pose of a camera from images, is a key component of mobile robotics, autonomous driving, and augmented reality.

[0003] The current mainstream visual localization algorithms are geometry-based. Given a set of captured images, the SFM (structure-from-motion) algorithm is first used to perform a 3D reconstruction of the scene, and each reconstructed 3D model point is assigned one or more feature descriptors. When a query image is given, feature points are first extracted and their descriptors computed, and then the 3D point most similar to each feature point is searche...
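The geometry-based pipeline sketched in paragraph [0003] typically feeds the resulting 2D-3D matches to a PnP solver inside a RANSAC loop, whose core operation is a reprojection-error inlier test. A minimal numpy sketch of that test follows; the function name `reprojection_inliers` and the pixel threshold are illustrative assumptions, not details from the patent.

```python
import numpy as np

def reprojection_inliers(pts3d, pts2d, K, R, t, thresh=2.0):
    """Count 2D-3D matches consistent with a candidate camera pose.

    pts3d : (N, 3) matched 3-D map points
    pts2d : (N, 2) matched 2-D feature locations in the query image
    K     : (3, 3) camera intrinsics; R, t : candidate pose
    A match is an inlier if its reprojection error is under `thresh` px.
    """
    # project each map point into the query view with the candidate pose
    proj = K @ (R @ pts3d.T + t.reshape(3, 1))
    uv = (proj[:2] / proj[2:]).T
    err = np.linalg.norm(uv - pts2d, axis=1)
    return np.count_nonzero(err < thresh)
```

A RANSAC-PnP localiser keeps the pose hypothesis that maximises this inlier count, then refines it on the inlier set.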

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/70, G06T11/00
CPC: G06T7/70, G06T11/006, G06T2207/10028, G06T2207/20081, G06T2207/20084, G06T2207/30252
Inventor: Gao Wei, Wan Yiming, Wu Yihong
Owner INST OF AUTOMATION CHINESE ACAD OF SCI