Virtual multi-point video generation method

A multi-viewpoint video technology, applied in image communication, electrical components, stereoscopic systems, etc. It addresses the problems of the limited viewing position and uncomfortable viewing of two-viewpoint 3D video, and achieves low cost, improved accuracy, and improved stability.

Active Publication Date: 2017-12-19
ZHENGZHOU UNIVERSITY OF LIGHT INDUSTRY +1

AI Technical Summary

Problems solved by technology

[0004] The invention aims to solve technical problems such as the limited viewing position and uncomfortable viewing of two-viewpoint 3D video, and to that end provides a virtual multi-viewpoint video generation method, which ...




Embodiment Construction

[0041] A virtual multi-viewpoint video generation method, the steps of which are as follows: Step S1, acquire a two-viewpoint 3D video and decode it to obtain a number of corresponding single-frame images.
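
A minimal sketch of step S1 is given below, assuming the two-viewpoint 3D video is an ordinary frame-packed video file readable by OpenCV; the file name is hypothetical and the patent does not prescribe a particular decoder.

```python
# Minimal sketch of step S1 (assumed frame-packed input file; name is hypothetical).
import cv2

def decode_frames(video_path="two_viewpoint_3d.mp4"):
    """Decode the two-viewpoint 3D video into a list of single-frame images."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()   # one frame-packed stereo image per call
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return frames
```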

[0042] Step S2, segment each single-frame image to obtain a left-viewpoint view and a right-viewpoint view.
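
The sketch below illustrates one way step S2 could split a frame, assuming side-by-side frame packing (the left half carries the left-viewpoint view); a top-bottom packed video would be split along the vertical axis instead.

```python
def split_views(frame):
    """Split one frame-packed stereo image into left- and right-viewpoint views.

    Assumes side-by-side packing (left half = left viewpoint); a top-bottom
    packed video would be split along axis 0 instead.
    """
    h, w = frame.shape[:2]
    left_view = frame[:, : w // 2]
    right_view = frame[:, w // 2:]
    return left_view, right_view
```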

[0043] Step S3, preprocess each left-viewpoint view and its corresponding right-viewpoint view obtained in step S2.

[0044] S3.1, establish a convolution template. The convolution template is an odd-sized elliptical template f1; the elliptical template f1 is ...
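
The exact coefficients of the elliptical template f1 are not reproduced in this excerpt, so the following sketch only illustrates the stated constraints (odd size, elliptical support) using a normalized elliptical mask; the concrete values in the patent may differ.

```python
import cv2
import numpy as np

def make_elliptical_template(size=5):
    """Build an odd-sized elliptical convolution template f1 (assumed coefficients).

    Only the shape constraints from the text are used: odd size and elliptical
    support.  The mask is normalized so the template sums to 1.
    """
    assert size % 2 == 1, "template size must be odd"
    mask = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (size, size)).astype(np.float32)
    return mask / mask.sum()

f1 = make_elliptical_template(5)
```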

[0045] S3.2, convolve the elliptical template f1 with each left-viewpoint view to obtain the preliminarily processed left-viewpoint view I_L.
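
A sketch of S3.2, assuming I_L is obtained with an ordinary 2D convolution; the border handling is a choice of this sketch, not something the excerpt specifies.

```python
import cv2

def preprocess_left_view(left_view, f1):
    """S3.2: convolve the elliptical template f1 with a left-viewpoint view."""
    # ddepth=-1 keeps the input depth; border replication is a choice of this
    # sketch, not something specified in the excerpt.
    return cv2.filter2D(left_view, -1, f1, borderType=cv2.BORDER_REPLICATE)

# Hypothetical usage, with left_view and f1 from the earlier sketches:
# I_L = preprocess_left_view(left_view, f1)
```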

[0046] S3.3, set the window size of the elliptical template f1; for each pixel of the preliminarily processed view I_L, take the window of the elliptical template f1 centred on that pixel, calculate the window mean, and use the window mean to replace the single-point ...
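
S3.3 is truncated in this excerpt; the sketch below assumes the intent is to replace each pixel of I_L with the mean over the elliptical window centred on it, which again reduces to a convolution with a normalized elliptical kernel.

```python
import cv2
import numpy as np

def window_mean_replace(I_L, window_size=5):
    """S3.3 (assumed reading): replace each pixel of I_L with the mean over the
    elliptical window centred on that pixel."""
    assert window_size % 2 == 1, "window size must be odd"
    mask = cv2.getStructuringElement(
        cv2.MORPH_ELLIPSE, (window_size, window_size)).astype(np.float32)
    mean_kernel = mask / mask.sum()      # averaging kernel over the window
    return cv2.filter2D(I_L.astype(np.float32), -1, mean_kernel)
```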



Abstract

The invention discloses a virtual multi-viewpoint video generation method. The method comprises the following steps: step S1, acquiring single-frame images; step S2, obtaining a left-viewpoint view and a right-viewpoint view; step S3, preprocessing; step S4, acquiring the initial parallax for the dense-point search; step S5, searching for matching points and computing the parallax of each pair of matching points; step S6, performing high-resolution linear interpolation on the matching-point parallaxes; step S7, obtaining virtual multi-viewpoint images; and step S8, compressing the virtual multi-viewpoint images into a video format and sending them to video memory for display. In the disclosed generation method, dense parallax data need to be acquired, so a mean value is used in place of single-point data, which improves the accuracy of the parallax search and the stability of the algorithm; the parallaxes of the other virtual viewpoints are computed according to the rectilinear propagation of light in geometrical optics, and the virtual multi-viewpoint images are then generated from these virtual parallaxes. Using existing two-viewpoint 3D video, images of other viewpoints can be obtained by digital computation, thereby realizing multi-viewpoint video production at low cost.
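
As a rough illustration of steps S6 and S7, the sketch below scales the parallax map linearly with the position of the virtual viewpoint on the baseline (one reading of the rectilinear-propagation principle mentioned above) and forward-warps the left view by the resulting virtual parallax; the function and parameter names are hypothetical, and occlusion handling and hole filling are omitted.

```python
import numpy as np

def virtual_view(left_view, parallax, alpha):
    """Generate a virtual-viewpoint image from the left view and its parallax map.

    `alpha` is the assumed fraction of the left-right baseline at which the
    virtual viewpoint sits (0 = left view, 1 = right view); the virtual
    parallax is taken as alpha * parallax, and each pixel is forward-warped
    horizontally by that amount.  Occlusion handling and hole filling omitted.
    """
    h, w = left_view.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    x_dst = np.clip(np.round(xs - alpha * parallax).astype(int), 0, w - 1)
    out = np.zeros_like(left_view)
    out[ys, x_dst] = left_view   # shift each pixel to its virtual-view column
    return out
```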

Description

Technical Field

[0001] The invention belongs to the technical field of video production, and in particular relates to a method for generating virtual multi-viewpoint video.

Background Technique

[0002] The human eye has the physiological characteristic of binocular vision, which enables people to perceive spatial objects three-dimensionally. Compared with common single-viewpoint 2D video, 3D video can bring people an immersive three-dimensional experience. Traditional 3D video viewing relies on the principle of polarized or red-blue light to deliver polarized or red-blue video information with parallax to the left and right eyes respectively, and the brain synthesizes a stereoscopic image. Viewing this kind of 3D video requires polarized or red-blue glasses, which is somewhat uncomfortable for viewers. Glasses-free 3D video (or autostereoscopic video) uses a parallax barrier or the principle of optical refraction ...

Claims


Application Information

IPC(8): H04N13/02
CPC: H04N13/282
Inventors: 马建荣贺振东刘洁王才东陈鹿民王玉川马斌智李浚源郑晓路李江王平李军李国敏
Owner: ZHENGZHOU UNIVERSITY OF LIGHT INDUSTRY