Single-viewpoint video depth obtaining method based on scene classification and geometric dimension

A scene-classification and geometric-annotation technology applied in the field of single-view video depth acquisition, achieving good results with low noise and a moderate amount of calculation

Inactive Publication Date: 2015-11-25
SHANDONG UNIV

AI Technical Summary

Problems solved by technology

For these scenes, depth cues such as motion, focus, defocus, linear perspective, atmospheric perspective, texture information, or a combination of these cues can be used to estimate depth.



Examples


Example Embodiment

[0064] Example 1

[0065] A single-view video depth acquisition method based on scene classification and geometric annotation, the specific steps of which include:

[0066] (1) Read the video sequence, use the optical flow method to perform motion estimation on adjacent frame images in the video sequence, and obtain the optical flow motion vector result. According to the optical flow motion vector result, judge whether the current frame image belongs to a camera-still, object-moving scene or to a camera-moving scene; the camera-moving scene includes a camera-moving, object-still scene and a camera-moving, object-moving scene;
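A minimal sketch of this classification step, assuming OpenCV's Farneback dense optical flow and a simple moving-pixel-ratio heuristic; the threshold values and the decision rule below are illustrative assumptions, not taken from the patent:

```python
import cv2
import numpy as np

def classify_scene(prev_gray, curr_gray, mag_thresh=1.0, moving_ratio_thresh=0.5):
    """Label a frame as 'camera-moving' or 'camera-still, object-moving'
    from dense optical flow (illustrative heuristic only)."""
    # Dense optical flow between consecutive grayscale frames (Farneback method).
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    # Per-pixel motion magnitude of the optical flow vectors.
    mag = np.linalg.norm(flow, axis=2)
    # Fraction of pixels whose motion exceeds the magnitude threshold.
    moving_ratio = np.mean(mag > mag_thresh)
    # Camera motion moves most of the image, while a moving object in front of
    # a static camera moves only a small portion of the pixels.
    if moving_ratio > moving_ratio_thresh:
        return "camera-moving scene"
    return "camera-still, object-moving scene"
```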

[0067] (2) Determine whether it is necessary to estimate the initial depth map of the current frame image. If necessary, go to step (3); otherwise, the initial depth map of the current frame image defaults to the initial depth map of the previous frame image, and go directly to step (4);
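A minimal sketch of the reuse decision in step (2), assuming an inter-frame difference test as the criterion; the patent's actual rule is not visible in this excerpt, so the measure and threshold below are assumptions:

```python
import numpy as np

def need_new_initial_depth(prev_gray, curr_gray, diff_thresh=10.0):
    """Decide whether to re-estimate the initial depth map (True) or reuse the
    previous frame's initial depth map (False). Illustrative criterion only."""
    # Mean absolute luminance difference as a crude measure of scene change.
    diff = np.abs(curr_gray.astype(np.float32) - prev_gray.astype(np.float32))
    return float(diff.mean()) > diff_thresh
```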

[0068] (3) Obtain the initial ...
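Step (3) is truncated here; the method's title indicates the initial depth comes from a geometric annotation of the scene. The sketch below assumes the common geometric-context convention (ground/sky/vertical labels, depth increasing from the image bottom toward the horizon) purely as an illustrative stand-in for whatever labeling scheme the patent actually uses:

```python
import numpy as np

def initial_depth_from_geometric_labels(labels, horizon_row):
    """Illustrative initial depth map (0 = near, 1 = far) from a per-pixel
    geometric labeling: 0 = ground, 1 = sky, 2 = vertical structure."""
    h, w = labels.shape
    depth = np.zeros((h, w), dtype=np.float32)

    # Ground: depth grows linearly from the bottom row (near) to the horizon (far).
    rows = np.arange(h, dtype=np.float32).reshape(-1, 1)
    ground_depth = np.clip((h - rows) / max(h - horizon_row, 1), 0.0, 1.0)
    depth = np.where(labels == 0, np.broadcast_to(ground_depth, (h, w)), depth)

    # Sky: assigned the farthest depth.
    depth[labels == 1] = 1.0

    # Vertical structures: inherit the ground depth at their lowest (contact) row.
    for col in range(w):
        vert_rows = np.where(labels[:, col] == 2)[0]
        if vert_rows.size:
            depth[vert_rows, col] = ground_depth[vert_rows.max(), 0]
    return depth
```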

Example Embodiment

[0070] Example 2

[0071] A single-view video depth acquisition method based on scene classification and geometric annotation, the specific steps of which include:

[0072] (1) Read the highway video downloaded from the ChangeDetection website, use the optical flow method to estimate the motion of adjacent frame images in the video sequence, and obtain the optical flow motion vector result. According to the optical flow motion vector result, judge whether the 8th frame image belongs to the camera-still, object-moving scene or to the camera-moving scene; the camera-moving scene includes a camera-moving, object-still scene and a camera-moving, object-moving scene. The specific steps include:

[0073] a. Read the highway video; figure 2 is a screenshot of the highway video. Obtain all the images, compute the optical flow motion vector results between adjacent frame images, and then gather the optical flow motion vector results of the first 7 frames of the 8th...



Abstract

The invention relates to a single-viewpoint video depth obtaining method based on scene classification and geometric dimension. The method specifically comprises the following steps: (1) judging whether the current frame image belongs to a scene in which the camera is static and an object moves, or to a camera-moving scene; (2) judging whether the initial depth map of the current frame image needs to be estimated; (3) computing the initial depth map of the current frame image; and (4) for the scene in which the camera is static and an object moves, obtaining a motion depth map of the current frame image and fusing it with the initial depth map; for the camera-moving scene, carrying out global motion compensation, performing motion estimation on adjacent frames after the global motion compensation by using the optical flow method, judging whether a moving object exists, and determining whether fusion with the initial depth map is needed. The method does not rely on any specific scene, requires a moderate amount of calculation, generates little noise, obtains a depth map that better conforms to the actual scene distribution, and synthesizes a 3D video with a better effect.
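A minimal sketch of the fusion in step (4) for the camera-still case, assuming a motion depth map derived from optical flow magnitude (larger motion treated as closer) and a simple convex combination; the normalization and the weight are assumptions, not the patent's actual fusion rule:

```python
import numpy as np

def fuse_depth(initial_depth, flow, alpha=0.5):
    """Fuse the initial depth map with a motion depth map computed from dense
    optical flow (illustrative weighting only). Depths lie in [0, 1], 1 = far."""
    # Motion cue: faster apparent motion usually means the object is closer,
    # so invert the normalized flow magnitude to obtain a depth-like value.
    mag = np.linalg.norm(flow, axis=2)
    motion_depth = 1.0 - mag / (mag.max() + 1e-6)
    # Simple convex combination of the initial (geometric) and motion depth cues.
    return alpha * initial_depth + (1.0 - alpha) * motion_depth
```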

Description

Technical Field

[0001] The invention relates to a single-view video depth acquisition method based on scene classification and geometric annotation, and belongs to the technical field of computer image processing.

Background Technique

[0002] At present, stereoscopic image technology has a wide range of applications, distributed in fields such as scientific research, military, education and medical treatment. Compared with 2D images, stereoscopic images bring us a more realistic and striking visual experience. At present, there are several ways to obtain a 3D film source, such as depth cameras and 2D-to-3D conversion technology. However, depth cameras are very expensive and can only produce 3D content for newly shot video, which is not practical in a 3DTV system. An effective way to solve this problem is 2D-to-3D conversion technology, because a large amount of 2D video already exists; 2D-to-3D conversion technology therefore has a very good development prospect. 2D to 3D technology...


Application Information

IPC(8): H04N13/00; G06T7/00
Inventor: 江铭炎, 徐慧慧
Owner: SHANDONG UNIV