
Method for evaluating video quality from virtual viewpoint

A video quality evaluation technology, applied in the field of virtual viewpoint video quality evaluation, which can solve problems of existing methods such as considering only the distortion of the rendering process, calculating distortion incompletely, and ignoring the temporal flicker distortion of virtual viewpoint video.

Active Publication Date: 2018-04-20
SHENZHEN INST OF ADVANCED TECH CHINESE ACAD OF SCI

AI Technical Summary

Problems solved by technology

[0009] The virtual viewpoint video quality evaluation schemes described above share a key disadvantage: they do not consider the temporal flicker distortion in virtual viewpoint video. Existing evaluation methods are, moreover, often not comprehensive enough: some consider only the influence of depth map distortion, while others consider only the distortion introduced by the rendering process itself.


Examples


Embodiment 1

[0056] A method for evaluating the quality of virtual viewpoint video, including the following steps:

[0057] Step S110: The original reference video and the video to be evaluated are each divided into space-time domain units.

[0058] Step S110 includes:

[0059] ① Divide the original reference video and the video to be evaluated in the time domain into groups of pictures composed of consecutive frames.

[0060] ② Divide each image in the group of pictures into several image blocks; image blocks that are consecutive in the time domain form a space-time domain unit.

[0061] In this embodiment, each space-time domain unit is composed of several temporally consecutive image blocks at the same spatial position.

[0062] Specifically, the original reference video and the video to be evaluated must first be divided into space-time domain units; the process is shown schematically in Figure 2. The video sequence (the original reference video and the video to be ...
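As a minimal illustration of this partition, the Python sketch below (using numpy) splits a video into co-located space-time domain units; the group-of-pictures length, block size, and all function names are illustrative assumptions rather than values fixed by the patent.

import numpy as np

def partition_into_st_units(video, gop_len=8, block=16):
    # video: array of shape (T, H, W). Returns a dict mapping
    # (gop_index, block_row, block_col) -> a unit of shape (gop_len, block, block).
    t, h, w = video.shape
    units = {}
    for g in range(t // gop_len):  # split the time domain into image groups
        gop = video[g * gop_len:(g + 1) * gop_len]
        for i in range(h // block):  # tile each frame into image blocks
            for j in range(w // block):
                # temporally consecutive co-located blocks form one unit
                units[(g, i, j)] = gop[:, i * block:(i + 1) * block,
                                       j * block:(j + 1) * block]
    return units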

Embodiment 2

[0121] Step S110: The original reference video and the video to be evaluated are each divided into space-time domain units.

[0122] Step S110 includes:

[0123] ① Divide the original reference video and the video to be evaluated in the time domain into groups of pictures composed of consecutive frames.

[0124] ② Divide each image in the group of pictures into several image blocks; image blocks that are consecutive in the time domain form a space-time domain unit.

[0125] In this embodiment, each space-time domain unit is composed of several temporally consecutive image blocks at different spatial positions, describing the motion trajectory of the same object.

[0126] Specifically, the original reference video and the video to be evaluated must first be divided into space-time domain units; the process is shown schematically in Figure 3. The video sequence (the original reference video and the video to be evaluated) is divided into several image groups, and each image group ...
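The patent text shown here does not spell out how the motion trajectory is obtained, so the sketch below uses full-search block matching with a sum-of-absolute-differences (SAD) criterion as one plausible, purely illustrative way to collect the temporally consecutive blocks that follow the same object through an image group.

import numpy as np

def sad(a, b):
    # sum of absolute differences between two equally sized blocks
    return np.abs(a.astype(np.int64) - b.astype(np.int64)).sum()

def track_unit(gop, start_rc, block=16, search=4):
    # gop: array of shape (T, H, W); start_rc: (row, col) of the block in frame 0.
    # Returns the trajectory-aligned space-time unit of shape (T, block, block).
    t, h, w = gop.shape
    r, c = start_rc
    unit = [gop[0, r:r + block, c:c + block]]
    for f in range(1, t):
        best_cost, best_rc = None, (r, c)
        for dr in range(-search, search + 1):  # small search window
            for dc in range(-search, search + 1):
                nr, nc = r + dr, c + dc
                if 0 <= nr <= h - block and 0 <= nc <= w - block:
                    cost = sad(unit[-1], gop[f, nr:nr + block, nc:nc + block])
                    if best_cost is None or cost < best_cost:
                        best_cost, best_rc = cost, (nr, nc)
        r, c = best_rc  # the block position follows the object's motion
        unit.append(gop[f, r:r + block, c:c + block])
    return np.stack(unit)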


Abstract

The invention relates to a virtual viewpoint video quality evaluation method. In this method, the temporal flicker distortion of the virtual viewpoint video is calculated over all pixels of a space-time domain unit taken together, which avoids the misestimation of human-perceived distortion that a pixel-by-pixel temporal flicker calculation produces for virtual viewpoint video. When calculating the temporal flicker distortion, the method considers not only the distortion caused by depth map errors but also the distortion introduced by the left and right viewpoint texture images. The method can therefore effectively evaluate the temporal flicker distortion that strongly affects the subjective quality of virtual viewpoint video, so that the evaluation result better matches human subjective perception, making virtual viewpoint video quality evaluation more accurate and comprehensive.
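As a hedged reading of this abstract, the sketch below pools a frame-to-frame mismatch over all pixels of a space-time domain unit at once instead of comparing pixel by pixel; the specific difference measure and the mean pooling are assumptions made for illustration, not the patent's actual formulas.

import numpy as np

def unit_flicker_distortion(ref_unit, dis_unit):
    # Both units have shape (T, block, block); compare their temporal gradients.
    ref_grad = np.diff(ref_unit.astype(np.float64), axis=0)
    dis_grad = np.diff(dis_unit.astype(np.float64), axis=0)
    # pool the gradient mismatch over every pixel of the unit as one group
    return np.mean((ref_grad - dis_grad) ** 2)

def video_flicker_score(ref_units, dis_units):
    # average the unit-level distortions over all space-time domain units
    return np.mean([unit_flicker_distortion(ref_units[k], dis_units[k])
                    for k in ref_units])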

Description

Technical field

[0001] The present invention relates to video quality evaluation technology, and in particular to an accurate and comprehensive virtual viewpoint video quality evaluation method.

Background technique

[0002] With the development of 3D video technology, more and more movies and TV programs are shot with 3D technology, and various 3D displays and 3D TVs are gradually becoming popular. The foreseeable trend is that 3D video will become mainstream in the future.

[0003] At present, the international standards organizations MPEG (Moving Picture Experts Group) and ITU-T VCEG (International Telecommunication Union - Telecommunication Standardization Sector Video Coding Experts Group) have jointly formulated a depth-information-based 3D video coding standard. In this standard 3D video solution, the encoder only needs to encode and transmit two to three color texture videos and the corresponding depth map videos, and the decoder can use the received two texture videos and th...
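Paragraph [0003] describes a decoder that synthesizes virtual viewpoints from the transmitted texture and depth videos. The following is a deliberately simplified sketch of that rendering step, using depth-image-based rendering with a purely horizontal disparity model; the baseline and focal-length parameters and the hole handling are illustrative assumptions, not the standard's actual synthesis pipeline.

import numpy as np

def render_virtual_view(texture, depth, baseline=0.05, focal=1000.0):
    # texture: (H, W) or (H, W, 3) array; depth: (H, W) array in meters.
    # Warps each pixel to a nearby virtual viewpoint by its disparity.
    h, w = depth.shape
    out = np.zeros_like(texture)
    disparity = (baseline * focal / np.maximum(depth, 1e-6)).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x - disparity[y, x]        # shift each pixel horizontally
            if 0 <= nx < w:
                out[y, nx] = texture[y, x]  # holes remain where nothing maps
    return out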

Claims


Application Information

Patent Type & Authority Patents(China)
IPC IPC(8): H04N17/00H04N19/154
Inventor 张云刘祥凯
Owner SHENZHEN INST OF ADVANCED TECH CHINESE ACAD OF SCI