
A Monocular Visual Odometry Method Fused with Edge Features and Deep Learning

A deep-learning, monocular-vision technology applied in the field of visual odometry. It addresses the problems that stereo VO degenerates to the monocular case and that monocular VO cannot recover the environment map and robot motion at their true scale, and achieves the effect of easily acquired training data.

Active Publication Date: 2020-08-14
NANJING XIAOZHUANG UNIV

AI Technical Summary

Problems solved by technology

However, once the stereo baseline is small relative to the scale of the scene, stereo VO degenerates to the monocular case.
[0003] Unlike stereo visual odometry, monocular VO cannot recover the environment map and robot motion at their true scale, so it must rely on prior knowledge or information such as camera height to estimate the absolute scale. This makes monocular VO more prone to large drift than stereo VO, and more challenging.
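The scale ambiguity described above can be illustrated with a short sketch (not part of the patent; the intrinsics, pose, and point are made-up values): scaling the whole scene and the camera translation by the same factor leaves every monocular projection unchanged, so that factor cannot be recovered from the images alone.

```python
import numpy as np

# Pinhole projection: x ~ K (R X + t). If the scene X and the translation t
# are both scaled by s, the homogeneous divide cancels s, so the image is
# identical -- the monocular scale ambiguity.

K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])   # hypothetical camera intrinsics
R = np.eye(3)                           # camera rotation
t = np.array([0.1, 0.0, 0.0])           # camera translation (unknown scale)
X = np.array([1.0, 2.0, 5.0])           # one 3D scene point

def project(K, R, t, X):
    """Project a 3D point to pixel coordinates."""
    p = K @ (R @ X + t)
    return p[:2] / p[2]

s = 3.7                                 # arbitrary scale factor
x1 = project(K, R, t, X)
x2 = project(K, R, s * t, s * X)        # scene and baseline scaled together
print(np.allclose(x1, x2))              # → True: projections are identical
```

This is why, as the passage notes, monocular VO needs an external cue (e.g. known camera height) to pin down the absolute scale.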



Embodiment Construction

[0050] The following will clearly and completely describe the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without making creative efforts belong to the protection scope of the present invention.

[0051] Referring to Figures 1-10, an embodiment of the present invention provides a technical solution: a monocular visual odometry method that fuses edge features and deep learning.

[0052] 1. A monocular visual odometry method that fuses edge features and deep learning, characterized in that: an edge enhancement algorithm is designed based on the Canny edge detection algorithm, and the image dataset after edge enhancement is used as convolutional neural network input an...
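As a rough illustration of the edge enhancement step, here is a simplified sketch (an assumption, not the patent's implementation): it uses Sobel gradient magnitude as a stand-in for the full Canny pipeline (no non-maximum suppression or hysteresis) and blends it back into the image. The function names and the blend weight `alpha` are invented for this example.

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels (edge-padded, brute force)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)

def edge_enhance(img, alpha=0.5):
    """Blend normalized gradient magnitude back into a grayscale image."""
    mag = sobel_magnitude(img)
    if mag.max() > 0:
        mag = mag / mag.max()            # normalize to [0, 1]
    out = np.clip(img / 255.0 + alpha * mag, 0.0, 1.0)
    return (out * 255).astype(np.uint8)

# Usage: a synthetic vertical step edge gets brightened along the boundary,
# while flat regions are left untouched.
img = np.zeros((8, 8), dtype=np.uint8)
img[:, 4:] = 200
out = edge_enhance(img)
print(out[:, 0].max(), out[:, 3].max() > 0)   # → 0 True
```

The idea, per the claim, is that boosting edge responses before the CNN gives the network more structure to latch onto in low-texture scenes.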


Abstract

The invention discloses a monocular visual odometry method fusing edge features and deep learning, and relates to the technical field of visual odometry. In low-texture motion scenes, few features can be extracted from an image, so feature data goes missing during feature matching and the accuracy of pose estimation decreases. The innovation of the present invention is a monocular visual odometry method that combines edge features and deep learning. First, an edge enhancement algorithm is designed based on the Canny edge detection algorithm. The edge-enhanced image dataset is used as the input of a convolutional neural network for feature extraction; the output of the convolutional neural network is then fed into a recurrent neural network for computation; finally, the whole model outputs an estimate of the camera pose and optimizes the feature extraction. Experimental results show that the algorithm can learn more image features during model training, improve the accuracy of pose estimation, and exhibit superior performance in low-texture scenes.
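The CNN-to-RNN pipeline described in the abstract can be illustrated shape by shape. The toy model below is an assumption-laden stand-in (random untrained weights, a hand-rolled convolution, a vanilla RNN cell, and a made-up 6-DoF output head, none of which come from the patent); it only shows how per-frame CNN features feed a recurrent network that emits one pose estimate per frame.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_features(frame, kernels):
    """Toy 'CNN': valid 3x3 convolutions + ReLU + global average pooling."""
    h, w = frame.shape
    feats = []
    for k in kernels:
        out = np.zeros((h - 2, w - 2))
        for i in range(h - 2):
            for j in range(w - 2):
                out[i, j] = (frame[i:i + 3, j:j + 3] * k).sum()
        feats.append(np.maximum(out, 0).mean())   # ReLU + mean pool
    return np.array(feats)

def rnn_pose(feature_seq, Wx, Wh, Wo):
    """Vanilla RNN over per-frame features; linear head -> 6-DoF pose."""
    h = np.zeros(Wh.shape[0])
    poses = []
    for f in feature_seq:
        h = np.tanh(Wx @ f + Wh @ h)     # recurrent state update
        poses.append(Wo @ h)             # [tx, ty, tz, roll, pitch, yaw]
    return np.stack(poses)

n_feat, n_hid = 4, 8
kernels = [rng.standard_normal((3, 3)) for _ in range(n_feat)]
Wx = rng.standard_normal((n_hid, n_feat)) * 0.1
Wh = rng.standard_normal((n_hid, n_hid)) * 0.1
Wo = rng.standard_normal((6, n_hid)) * 0.1

frames = [rng.standard_normal((10, 10)) for _ in range(5)]
feats = [conv_features(f, kernels) for f in frames]
poses = rnn_pose(feats, Wx, Wh, Wo)
print(poses.shape)                       # → (5, 6): one pose per frame
```

In the real method the CNN and RNN would be trained end to end on edge-enhanced image sequences, which is what lets the model accumulate temporal context across frames.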

Description

Technical field

[0001] The invention relates to the technical field of visual odometry, in particular to a monocular visual odometry method that combines edge features and deep learning.

Background technique

[0002] Visual odometry is a method for estimating ego-motion from input images and is a core module of simultaneous localization and mapping systems. Since monocular visual odometry (Visual Odometry, VO) can determine the current position from the camera's signal alone, it has become a hot research topic in computer vision, with wide applications in autonomous driving, robotics and other fields. In recent years, visual odometry from stereo cameras has achieved great progress and been widely adopted thanks to its reliable depth map estimation. However, once the stereo baseline is small relative to the scale of the scene, it degenerates to the monocular case. [0003] Different from stereo vi...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T7/55; G06K9/62
CPC: G06T7/55; G06T2207/10016; G06T2207/10024; G06T2207/20081; G06T2207/20084; G06F18/241
Inventors: 王燕清, 陈长伟, 赵向军, 石朝侠, 肖文洁, 李泳泉
Owner: NANJING XIAOZHUANG UNIV