
Supervised data driving-based monocular video depth estimating method

A data-driven depth estimation technology, applied in the field of pattern recognition, that addresses problems such as poor spatio-temporal consistency and results that are unsatisfactory to users, and achieves good visual quality and strong generalization.

Active Publication Date: 2017-04-26
HUAZHONG UNIV OF SCI & TECH

AI Technical Summary

Benefits of technology

This technology estimates depth from 2D monocular video without relying solely on ground-truth measurements such as distance or camera angles. By combining spatial and temporal information, it produces accurate, consistent estimates even for scenes containing irregular content such as rocks or water waves that can otherwise degrade frame-by-frame estimation, and it achieves good visual results with strong generalization.

Problems solved by technology

Existing approaches to generating three-dimensional video from monocular footage, such as depth estimation built on predefined structural models and frame-structure analysis, produce depth estimates with poor spatio-temporal consistency. This patent proposes a data-driven solution that uses deep learning to improve depth estimation without relying heavily on predefined models.



Examples


Embodiment Construction

[0041] In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention, not to limit it. In addition, the technical features involved in the various embodiments of the present invention described below can be combined with each other as long as they do not conflict with one another.

[0042] The process of the supervised data-driven monocular video depth estimation method provided by the present invention is shown in figure 1 and includes: obtaining the training data set, constructing the network model, segmenting the training data set and extracting features, using the training data to train the network parameters, segmenting the data to be estimated and extracting its features, and inputting them into the trained model to obtain the estimated depth sequence.
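As a rough, hypothetical illustration of that flow (not the patent's actual implementation), the sketch below chains the steps together with simple stand-ins: a regular grid stands in for tracking-based superpixel segmentation, per-superpixel mean colour stands in for learned CNN features, and a linear regressor stands in for the deep spatio-temporal CNN-CRF model. All function names, shapes and the synthetic data are illustrative assumptions.

```python
# Minimal pipeline sketch: train on (video, depth) pairs, then estimate depth
# for a new frame. Every component is a simplified stand-in, not the patent's method.
import numpy as np

def segment_into_superpixels(frame, n_segments=200):
    # Stand-in for tracking-based superpixel segmentation: tile the frame
    # into a regular grid so the sketch runs end to end.
    h, w = frame.shape[:2]
    rows = int(np.sqrt(n_segments))
    return (np.arange(h)[:, None] * rows // h) * rows + (np.arange(w)[None, :] * rows // w)

def extract_features(frame, labels):
    # Mean colour per superpixel as a stand-in for learned CNN features.
    n = labels.max() + 1
    feats = np.zeros((n, frame.shape[2]))
    for k in range(n):
        feats[k] = frame[labels == k].mean(axis=0)
    return feats

def train_model(feature_list, depth_list):
    # Stand-in for training the deep spatio-temporal CNN-CRF:
    # a linear regressor from superpixel features to mean superpixel depth.
    X = np.vstack(feature_list)
    y = np.concatenate(depth_list)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def estimate_depth(frame, model):
    labels = segment_into_superpixels(frame)
    feats = extract_features(frame, labels)
    per_superpixel = feats @ model
    return per_superpixel[labels]            # broadcast back to pixel resolution

# Tiny synthetic "video" and ground-truth depth, just to exercise the flow.
rng = np.random.default_rng(0)
video = rng.random((4, 48, 64, 3))           # 4 RGB frames
depth = rng.random((4, 48, 64))              # matching depth maps

feat_list, target_list = [], []
for frame, d in zip(video, depth):
    labels = segment_into_superpixels(frame)
    feat_list.append(extract_features(frame, labels))
    target_list.append(np.array([d[labels == k].mean() for k in range(labels.max() + 1)]))

model = train_model(feat_list, target_list)
pred = estimate_depth(video[0], model)
print(pred.shape)                            # (48, 64)
```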



Abstract

The invention discloses a supervised data-driven monocular video depth estimation method comprising the following steps: (1) a sample video sequence and a corresponding depth sequence are obtained as a training data set; (2) a tracking-based superpixel segmentation method is used to segment the training data set, and features of each segmented unit are extracted; (3) a network model combining a convolutional neural network and a spatio-temporal conditional random field is constructed; (4) the training data set, the segmentation result and the corresponding features are used to train the deep spatio-temporal convolutional neural network field model; (5) the video sequence to be estimated is segmented, and features of each segmented unit are extracted; (6) the video sequence to be estimated, the segmentation result and the corresponding features are input into the trained model, and a depth sequence is obtained. Because the method takes both spatio-temporal consistency and the accuracy of depth-layer relations into account, it can improve the quality of stereoscopic video generated from monocular video.
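The central modelling step in the abstract, combining a convolutional neural network with a spatio-temporal conditional random field, can be sketched numerically. Assuming (as an illustration, not the patent's exact formulation) Gaussian unary and pairwise potentials over superpixels, the maximum-a-posteriori depth has the closed form d* = (I + λL)⁻¹ z, where z holds the CNN's per-superpixel depth predictions and L is the graph Laplacian of the spatial and temporal neighbourhood graph. The graph, the edge weights and λ below are made-up values.

```python
# Hypothetical sketch: fuse per-superpixel CNN depth predictions with a
# spatio-temporal CRF whose Gaussian pairwise terms pull neighbouring
# superpixels (within a frame and across tracked frames) towards similar depths.
import numpy as np

def crf_refine(unary_depth, edges, weights, lam=1.0):
    """MAP inference for a Gaussian CRF: argmin_d ||d - z||^2 + lam * sum_ij w_ij (d_i - d_j)^2."""
    n = len(unary_depth)
    L = np.zeros((n, n))                      # graph Laplacian of the neighbourhood graph
    for (i, j), w in zip(edges, weights):
        L[i, i] += w; L[j, j] += w
        L[i, j] -= w; L[j, i] -= w
    return np.linalg.solve(np.eye(n) + lam * L, unary_depth)

# 3 superpixels per frame, 2 frames -> 6 nodes.
z = np.array([1.0, 1.2, 3.0, 1.1, 1.3, 2.9])      # noisy per-superpixel CNN predictions
spatial_edges  = [(0, 1), (1, 2), (3, 4), (4, 5)] # neighbours within a frame
temporal_edges = [(0, 3), (1, 4), (2, 5)]         # tracked superpixels across frames
edges   = spatial_edges + temporal_edges
weights = [1.0, 0.1, 1.0, 0.1, 2.0, 2.0, 2.0]     # e.g. appearance/tracking similarity

refined = crf_refine(z, edges, weights, lam=0.5)
print(np.round(refined, 2))   # smoother in space and time than the raw predictions
```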


