
Method for calculating monocular video depth map

A depth-map calculation method in the field of computer vision, designed to improve calculation efficiency while guaranteeing the quality of the result.

Active Publication Date: 2017-12-15
HUAZHONG UNIV OF SCI & TECH
Cites: 5 | Cited by: 20

AI Technical Summary

Problems solved by technology

Aiming at the situation that training samples for existing supervised-learning approaches are not easy to obtain, the method of the present invention calculates the depth values of a monocular video using an approach guaranteed by a physical mechanism; at the same time, it introduces dense optical flow for matching, so that the depth map of each video frame can be calculated more accurately.
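The patent text does not disclose which dense optical flow algorithm is used; practical systems typically rely on variational or pyramidal methods such as Farneback's. Purely to make the idea of dense per-pixel matching concrete, here is a deliberately naive block-matching sketch in numpy, where the patch radius and search range are illustrative assumptions, not parameters from the patent:

```python
import numpy as np

def dense_flow(f1, f2, radius=1, search=2):
    """Naive dense optical flow between grayscale frames f1 and f2.

    For every pixel, compare a (2*radius+1)^2 patch in f1 against all
    patches in f2 displaced by up to `search` pixels, and keep the
    displacement (dx, dy) with the smallest sum of squared differences.
    """
    h, w = f1.shape
    pad = radius + search
    g1 = np.pad(f1, pad, mode="edge")
    g2 = np.pad(f2, pad, mode="edge")
    flow = np.zeros((h, w, 2))
    for y in range(h):
        for x in range(w):
            patch = g1[y + search:y + search + 2 * radius + 1,
                       x + search:x + search + 2 * radius + 1]
            best = None
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = g2[y + search + dy:y + search + dy + 2 * radius + 1,
                              x + search + dx:x + search + dx + 2 * radius + 1]
                    ssd = float(np.sum((patch - cand) ** 2))
                    if best is None or ssd < best:
                        best, flow[y, x] = ssd, (dx, dy)
    return flow
```

Shifting a random image one pixel to the right yields a flow of (1, 0) for interior pixels; a real implementation would of course be vectorized and multi-scale.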



Examples


Embodiment Construction

[0070] In order to make the object, technical solution, and advantages of the present invention clearer, the present invention is described in further detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention, not to limit it. In addition, the technical features involved in the various embodiments described below can be combined with each other as long as they do not conflict.

[0071] The monocular video depth estimation method based on SFM and dense optical flow provided by an embodiment of the present invention proceeds as shown in figure 1. It includes: decomposing the video to be restored into pictures frame by frame; extracting the feature points of each frame; matching the feature points and forming feature-point trajectories; calculating the global rotation matrices and translation vectors; optimizing the camera parameters; calculating the dense optical flow of the selected frames; and calculating the depth values of the selected frames to obtain the depth maps.
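The step of chaining per-frame feature matches into multi-frame trajectories can be sketched as follows. This is a minimal illustration rather than the patent's own procedure; it assumes the matches between consecutive frames have already been computed (e.g. by a descriptor matcher) and are given as index maps:

```python
def build_trajectories(pairwise_matches):
    """Chain per-frame feature matches into multi-frame trajectories.

    pairwise_matches[i] is a dict mapping a feature index in frame i to
    the matched feature index in frame i+1; a feature with no match in
    the next frame ends its track. Returns a list of trajectories, each
    a list of (frame, feature_index) pairs.
    """
    trajectories = []
    active = {}  # feature index in the current frame -> track it extends
    for frame, matches in enumerate(pairwise_matches):
        next_active = {}
        for src, dst in matches.items():
            track = active.pop(src, None)
            if track is None:  # feature not seen before: start a new track
                track = [(frame, src)]
                trajectories.append(track)
            track.append((frame + 1, dst))
            next_active[dst] = track
        active = next_active
    return trajectories
```

For example, with matches {0: 2, 1: 0} between frames 0 and 1, and {2: 5} between frames 1 and 2, feature 0 of frame 0 is tracked through feature 2 of frame 1 to feature 5 of frame 2, while feature 1's track ends at frame 1.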



Abstract

The invention provides a method for calculating a monocular video depth map, comprising the steps of: decomposing the video to be recovered into frames; extracting the picture feature points of each frame; matching the feature points and forming feature-point trajectories; calculating a global rotation matrix and translation vector; optimizing the camera parameters; calculating the dense optical flow of a selected frame; and calculating the depth values of the selected frame to obtain a depth map. The method uses a structure-from-motion (SFM) depth estimation approach grounded in a physical mechanism, with dense optical flow used for matching. It requires no training samples, uses no optimization steps such as segmentation or plane fitting, and its computational cost is small. At the same time, it solves the prior-art problem that, in going from sparse reconstruction to dense reconstruction, depth values cannot be obtained for all pixels, especially in texture-free areas; the accuracy of the depth map is ensured while the calculation efficiency is improved.
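The final step — recovering per-pixel depth from the estimated camera motion and the dense optical flow — can be illustrated with a small numpy sketch. This is not the patent's exact formulation: it assumes pinhole intrinsics K and a known rotation R and translation t between two frames, and solves the two-view projective relation for the depth Z of a pixel by least squares:

```python
import numpy as np

def depth_from_flow(p1, flow, K, R, t):
    """Recover the depth Z of pixel p1 in frame 1 from its optical flow.

    A 3D point at depth Z along the ray of pixel p1 projects in frame 2
    to (Z*a + b) / (Z*a_z + b_z), with a = K R K^-1 p1_h and b = K t.
    Matching this projection to p2 = p1 + flow gives two linear
    equations in Z, solved here in least-squares fashion.
    """
    p1_h = np.array([p1[0], p1[1], 1.0])
    p2 = np.array([p1[0] + flow[0], p1[1] + flow[1]])
    a = K @ R @ np.linalg.inv(K) @ p1_h
    b = K @ np.asarray(t, dtype=float)
    # Z * (p2 * a_z - a_xy) = b_xy - p2 * b_z  (one equation per axis)
    A = p2 * a[2] - a[:2]
    rhs = b[:2] - p2 * b[2]
    return float(A @ rhs / (A @ A))

# Synthetic check: a point at depth 4 seen from two poses.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
R = np.eye(3)
t = np.array([0.1, 0.0, 0.0])  # pure sideways translation
Z_true = 4.0
p1 = np.array([100.0, 80.0])
X = Z_true * np.linalg.inv(K) @ np.array([p1[0], p1[1], 1.0])
x2 = K @ (R @ X + t)
flow = x2[:2] / x2[2] - p1  # exact flow for this synthetic point
print(depth_from_flow(p1, flow, K, R, t))  # ≈ 4.0
```

The degenerate case (flow consistent with any depth, e.g. pure rotation) would need explicit handling in a full implementation, and the patent's optimization over all frames and pixels is of course more involved than this single-pixel sketch.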

Description

Technical field

[0001] The invention belongs to the field of computer vision, and in particular relates to a method for calculating a monocular video depth map.

Background technique

[0002] With the development of science and technology, 3D movies and virtual reality are enriching people's lives. However, both the 3D movies that have become popular all over the world and the still-ascendant virtual reality face a serious problem: the current shortage of 3D resources. In the prior art, depth is mainly predicted from monocular video, and binocular stereoscopic video is then obtained through viewpoint synthesis; this is also the main way of addressing the shortage of 3D resources. [0003] Within this technical approach, depth estimation from monocular video has received extensive attention as an important component. At present, mainstream monocular depth prediction methods include depth estimation methods based on deep learning...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T7/55, G06T7/73
CPC: G06T7/55, G06T7/73, G06T2207/10016, G06T2207/30241
Inventors: 曹治国, 张润泽, 肖阳, 鲜可, 杨佳琪, 李然, 赵富荣, 李睿博
Owner: HUAZHONG UNIV OF SCI & TECH