
A monocular vision odometer method adopting deep learning and mixed pose estimation

A pose estimation and monocular vision technology, applied in computing, photo interpretation, image data processing, etc. It solves the problems that existing methods do not exploit the advantages of geometric theory and that their generalization ability needs to be improved, so as to achieve good robustness and accurate pose estimation results.

Status: Pending · Publication Date: 2020-11-06
Applicant: HARBIN ENG UNIV
Cites: 6 · Cited by: 7

AI Technical Summary

Problems solved by technology

Similar to the first method, this method still relies solely on a neural network to estimate the pose, without exploiting the advantages of geometric theory in pose estimation, and its generalization ability needs to be improved.




Embodiment Construction

[0043] The specific embodiments of the present invention will be further described below in conjunction with the accompanying drawings.

[0044] This method uses two deep learning networks: one, called the dense optical flow network, extracts the dense optical flow field between adjacent images; the other, called the dense depth network, extracts the dense depth field of a single image. Key point matching pairs are obtained from the optical flow field and input into the hybrid 2d-2d and 3d-2d pose estimation algorithm to obtain relative pose information.
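The patent does not spell out how the matching pairs are sampled from the dense flow field; the following Python sketch illustrates one straightforward scheme, picking points on a regular grid in the first image and displacing them by the flow. The function name, the grid spacing, and the in-bounds check are assumptions for illustration, not the patent's implementation.

    import numpy as np

    def matches_from_flow(flow, step=16):
        """Sample key point matching pairs from a dense optical flow field.

        flow: (H, W, 2) array with flow[y, x] = (dx, dy) from frame t to t+1.
        Returns (pts1, pts2), float32 arrays of shape (N, 2).
        """
        h, w = flow.shape[:2]
        ys, xs = np.mgrid[step // 2:h:step, step // 2:w:step]
        pts1 = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
        pts2 = pts1 + flow[ys.ravel(), xs.ravel()]
        # Keep only matches whose endpoint lands inside the second image.
        ok = ((pts2[:, 0] >= 0) & (pts2[:, 0] < w) &
              (pts2[:, 1] >= 0) & (pts2[:, 1] < h))
        return pts1[ok], pts2[ok]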

[0045] With reference to Figure 1, the implementation process of the monocular visual odometry method of the present invention is as follows:

[0046] Step 1. In a set of image sequences, adjacent images are grouped in pairs to form image pairs, which are iteratively input into the monocular visual odometry; the dense optical flow network is used to estimate the dense optical flow field between each group of ...
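The patent does not identify the dense optical flow network. As a stand-in for illustration, a pretrained RAFT model from torchvision (assuming torchvision 0.12 or later) can estimate the flow field for each adjacent pair; this is a minimal sketch, not the patent's network:

    import torch
    from torchvision.models.optical_flow import raft_large, Raft_Large_Weights

    # Pretrained RAFT stands in for the patent's unspecified flow network.
    weights = Raft_Large_Weights.DEFAULT
    model = raft_large(weights=weights).eval()
    preprocess = weights.transforms()

    @torch.no_grad()
    def dense_flow(img1, img2):
        """Dense optical flow field for one adjacent-image pair.

        img1, img2: (3, H, W) image tensors with H and W divisible by 8.
        Returns a (2, H, W) tensor of per-pixel (dx, dy) from img1 to img2.
        """
        batch1, batch2 = preprocess(img1.unsqueeze(0), img2.unsqueeze(0))
        flows = model(batch1, batch2)   # RAFT returns iterative refinements
        return flows[-1][0]             # keep the final, most refined field

    # Step 1 as described above: adjacent images grouped in pairs,
    # input iteratively into the odometry pipeline.
    # for img1, img2 in zip(frames[:-1], frames[1:]):
    #     flow = dense_flow(img1, img2)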



Abstract

The invention discloses a monocular visual odometry method adopting deep learning and hybrid pose estimation. The method comprises the following steps: estimating the optical flow field between consecutive images using a deep learning neural network, and extracting key point matching pairs from the optical flow field; taking the key point matching pairs as input and, according to the 2d-2d pose estimation principle, preliminarily calculating a rotation matrix and a translation vector using the epipolar geometry method; estimating the monocular image depth field using a deep neural network, combining it with the geometric triangulation method, and using the depth field as a reference value to calculate the absolute scale with the RANSAC algorithm, thus converting the pose from the normalized coordinate system to the real coordinate system; and, when the 2d-2d pose estimation fails or the absolute scale estimation fails, performing pose estimation with the PnP algorithm according to the 3d-2d pose estimation principle. The invention obtains accurate pose estimation and absolute scale estimation, has good robustness, and can reproduce the camera trajectory well in different scene environments.
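A minimal OpenCV sketch of the hybrid pipeline the abstract describes: 2d-2d epipolar geometry first, absolute scale recovered against the predicted depth field, and a 3d-2d PnP fallback. The function hybrid_pose() is a hypothetical helper; the inlier thresholds are assumptions, and a robust median ratio stands in for the patent's RANSAC scale fit.

    import cv2
    import numpy as np

    def hybrid_pose(pts1, pts2, depth1, K):
        """Hybrid 2d-2d / 3d-2d relative pose for one image pair.

        pts1, pts2: (N, 2) float32 key point matching pairs.
        depth1: (H, W) depth field predicted for the first image.
        K: 3x3 camera intrinsics.
        """
        # 2d-2d: epipolar geometry yields rotation and unit-scale translation.
        E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                          prob=0.999, threshold=1.0)
        if E is not None and E.shape == (3, 3) and inliers is not None \
                and inliers.sum() > 20:
            _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
            # Triangulate inliers in the normalized frame, then recover the
            # absolute scale against the network-predicted depth.
            P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
            P2 = K @ np.hstack([R, t])
            good = mask.ravel().astype(bool)
            X = cv2.triangulatePoints(P1, P2, pts1[good].T, pts2[good].T)
            z_tri = X[2] / X[3]
            uv = pts1[good].astype(int)
            z_net = depth1[uv[:, 1], uv[:, 0]]
            valid = (z_tri > 0) & (z_net > 0)
            if valid.sum() >= 10:
                # Median ratio: a stand-in for the patent's RANSAC scale fit.
                scale = np.median(z_net[valid] / z_tri[valid])
                return R, scale * t
        # 3d-2d fallback: back-project pts1 with the predicted depth, run PnP.
        uv = pts1.astype(int)
        z = depth1[uv[:, 1], uv[:, 0]]
        valid = z > 0
        rays = np.linalg.inv(K) @ np.hstack(
            [pts1[valid], np.ones((int(valid.sum()), 1))]).T
        X3 = (rays * z[valid]).T.astype(np.float64)
        ok, rvec, tvec, _ = cv2.solvePnPRansac(
            X3, pts2[valid].astype(np.float64), K, None)
        if not ok:
            raise RuntimeError("pose estimation failed for this image pair")
        R, _ = cv2.Rodrigues(rvec)
        return R, tvec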

Description

Technical Field
[0001] The invention relates to a monocular visual odometry method, in particular to a monocular visual odometry method using deep learning and hybrid pose estimation, and belongs to the technical field of simultaneous localization and mapping (SLAM).
Background
[0002] Simultaneous localization and mapping is mainly applied in fields such as robots, drones, unmanned driving, augmented reality, and virtual reality; it addresses the problem of localizing and building a map while moving in an unknown environment. As one of the core components of simultaneous localization and mapping, visual odometry can locate the robot's own position in the environment and estimate relative motion state information of 6 degrees of freedom, comprising 3 degrees of freedom of displacement information and 3 degrees of freedom of rotation information; the absolute position information is then calculated from the relative motion information, so as to reproduce th...
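As a worked illustration of the step just described, chaining the relative motion estimates into absolute poses: a minimal sketch, assuming each (R, t) expresses frame k+1's pose in frame k's coordinates (a convention chosen here, not stated in the patent).

    import numpy as np

    def accumulate_trajectory(relative_motions):
        """Chain relative 6-DoF motions (R, t) into absolute camera poses.

        relative_motions: iterable of (R, t) with R a 3x3 rotation matrix
        and t a 3-vector, frame k+1's pose expressed in frame k.
        """
        T = np.eye(4)                      # pose of frame 0 = world origin
        trajectory = [T.copy()]
        for R, t in relative_motions:
            T_rel = np.eye(4)
            T_rel[:3, :3] = R
            T_rel[:3, 3:] = np.reshape(t, (3, 1))
            T = T @ T_rel                  # compose: frame k+1 in world frame
            trajectory.append(T.copy())
        return trajectory                  # camera position: trajectory[k][:3, 3]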


Application Information

IPC (IPC8): G06T7/246; G06T7/73; G01C22/00; G01C11/04
CPC: G06T7/246; G06T7/73; G01C22/00; G01C11/04; G06T2207/10016; G06T2207/20016; G06T2207/20081; G06T2207/20084; G06T2207/30241; Y02T10/40
Inventors: 王宏健, 班喜程, 李娟, 李庆, 肖瑶, 汤扬华, 韩宇辰, 刘越
Owner: HARBIN ENG UNIV