
Method for converting 2D video to 3D based on sample learning and depth-image propagation

A technology combining sample learning and depth images, applied in the field of video processing. It addresses the problems that existing methods cannot preserve depth-image boundary information, have difficulty ensuring the continuity of 3D images, and distort image boundaries; the achieved effects are reduced computational complexity, improved edge and texture clarity, and internal smoothness.

Inactive Publication Date: 2014-04-09
XIDIAN UNIV


Problems solved by technology

Although this method has low computational complexity, it cannot preserve the boundary information of the depth image, which causes boundaries in the resulting 3D images to be distorted and deformed.
Moreover, if the above method is applied directly to converting 2D video to 3D, a large amount of computing time is required to process each frame, and because the content changes between consecutive frames, it is difficult to ensure the temporal continuity of the 3D video.




Detailed Description of Embodiments

[0041] Referring to Figure 1, the implementation steps of the present invention are as follows:

[0042] Step 1. Extract image features

[0043] 1a) Input two 2D video images I1 and I2, each of size 320×240, and extract the histogram of oriented gradients (HOG) feature vector of video image I1. The specific steps are as follows:

[0044] (1a1) Divide the video image I1 into cells of size 40×40 and compute a 9-direction gradient histogram in each cell; group four adjacent cells into a block of size 80×80, and concatenate the gradient histograms of the four cells in a block to obtain the block's gradient-histogram feature vector;

[0045] (1a2) Concatenate the gradient-histogram feature vectors of all blocks to obtain the HOG feature vector of video image I1.
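Steps (1a1)-(1a2) can be sketched as follows. This is a minimal illustration, not the patent's exact implementation: it assumes non-overlapping 2×2-cell blocks, unsigned gradient orientations, and no block normalization, since the text does not specify those details.

```python
import numpy as np

def hog_feature(gray, cell=40, bins=9):
    """Sketch of step 1a: per-cell 9-bin orientation histograms on
    40x40 cells, grouped into 80x80 (2x2-cell) blocks and concatenated."""
    # image gradients (magnitude and unsigned orientation in [0, 180))
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    bin_idx = np.minimum((ang / (180.0 / bins)).astype(int), bins - 1)

    h, w = gray.shape
    ny, nx = h // cell, w // cell
    # magnitude-weighted orientation histogram for each cell
    cells = np.zeros((ny, nx, bins))
    for i in range(ny):
        for j in range(nx):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            b = bin_idx[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            cells[i, j] = np.bincount(b.ravel(), weights=m.ravel(),
                                      minlength=bins)
    # group 2x2 adjacent cells into non-overlapping 80x80 blocks
    # (assumed layout) and concatenate their histograms
    feats = []
    for i in range(0, ny - 1, 2):
        for j in range(0, nx - 1, 2):
            feats.append(cells[i:i+2, j:j+2].ravel())
    return np.concatenate(feats)

# A 320x240 frame gives 8x6 cells -> 4x3 blocks of 4 cells x 9 bins
frame = np.random.rand(240, 320)
f = hog_feature(frame)
print(f.shape)  # (432,)
```

Under these assumptions the feature vector has 12 blocks × 36 dimensions = 432 entries; standard HOG variants with overlapping blocks would yield a longer vector.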

[0046] 1b) Extract all color images Ii, 1≤i≤N, of size 320×240 from the color-depth (RGB-D) image-pair database, where N is the number of color images in the databas...
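Although the remainder of step 1b is truncated here, sample-learning approaches of this kind typically rank the RGB-D database images by feature similarity to the input frame and reuse the depth of the best matches. A hedged sketch under that assumption (the function name, distance metric, and k=5 are illustrative, not from the patent):

```python
import numpy as np

def retrieve_similar_samples(query_feat, db_feats, k=5):
    """Rank database images by Euclidean distance between HOG feature
    vectors and return the indices of the k nearest matches."""
    d = np.linalg.norm(db_feats - query_feat, axis=1)
    return np.argsort(d)[:k]

# toy database: N=100 feature vectors of dimension 432
rng = np.random.default_rng(0)
db = rng.random((100, 432))
q = db[7] + 0.001 * rng.random(432)   # query very close to entry 7
print(retrieve_similar_samples(q, db, k=3)[0])  # 7 (the perturbed entry)
```

The depths of the retrieved samples would then be fused (e.g. by weighted averaging) to form the candidate depth map for the first frame.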



Abstract

The invention discloses a method for converting 2D video to 3D based on sample learning and depth-image propagation, which mainly solves the problems of high computational complexity and poor applicability in existing 2D-to-3D video conversion. The method comprises the steps of: (1) inputting two frames of 2D video; (2) obtaining the best depth value at each pixel position of the first video frame by a sample-learning-based method; (3) post-processing the best depth values; (4) obtaining the depth value at each pixel position of the second input frame by a depth-propagation technique; and (5) combining the input video frames with the obtained per-pixel depth values, via depth-image-based rendering, to form a left-right-format 3D video. The method has low computational complexity and can obtain high-quality depth images with distinct moving foregrounds, clear edges, and natural structure, thereby forming 3D video with a good stereoscopic effect; it can be widely used in video processing related to 3D television.
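Step (5), depth-image-based rendering into a left-right format, can be sketched as below. This is a simplified forward-warping illustration, not the patent's implementation: the linear depth-to-disparity mapping, the maximum disparity of 8 pixels, and the hole-filling strategy (keeping the original pixel) are all assumptions.

```python
import numpy as np

def dibr_left_right(image, depth, max_disp=8):
    """Shift each pixel horizontally in proportion to its depth to
    synthesize left and right views, then pack them side by side."""
    h, w = depth.shape
    # map depth to an integer disparity in [0, max_disp] (assumed linear)
    disp = (depth / max(depth.max(), 1e-6) * max_disp).astype(int)
    left = image.copy()    # start from the source so holes keep it
    right = image.copy()
    cols = np.arange(w)
    for y in range(h):
        xl = np.clip(cols + disp[y], 0, w - 1)  # shift right for left view
        xr = np.clip(cols - disp[y], 0, w - 1)  # shift left for right view
        left[y, xl] = image[y, cols]
        right[y, xr] = image[y, cols]
    # side-by-side (left-right format) 3D frame
    return np.hstack([left, right])

img = np.random.rand(240, 320)                       # grayscale frame
dep = np.tile(np.linspace(0, 1, 320), (240, 1))      # toy depth ramp
sbs = dibr_left_right(img, dep)
print(sbs.shape)  # (240, 640)
```

A 3D-capable display interprets the left and right halves of the packed frame as the two eye views.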

Description

Technical field [0001] The invention belongs to the technical field of video processing and relates to a video stereoscopic-conversion method that can be used to convert 2D video into 3D stereoscopic video.
Background technique [0002] With the rapid development of 3D TV technology, people can watch more stereoscopic and realistic video programs on 3D TVs, but the development of 3D TV is greatly limited by the lack of 3D content. In the prior art, 3D content is obtained by shooting with a 3D stereo camera, but this approach is too expensive and demands considerable professional skill. Therefore, it has been proposed to convert existing 2D resources into a 3D stereoscopic format to make up for the shortage of 3D content. [0003] Converting 2D resources into a 3D stereoscopic format means making 2D video stereoscopic, that is, generating a 3D stereoscopic video by estimating a depth image from the video sequence and applying a depth-image-based rendering technolog...


Application Information

IPC(8): H04N13/00, H04N13/02
Inventors: 郑喆坤, 焦李成, 王磊, 马晶晶, 马文萍, 侯彪
Owner: XIDIAN UNIV