
Three-dimensional video depth map coding method based on just distinguishable parallax error estimation

A depth map coding technology for 3D video, applied in the field of 3D video depth map coding based on just distinguishable parallax error estimation. It addresses problems such as perceptible distortion and the inability to guarantee rate-distortion performance, and achieves the effects of reducing bit rate and improving subjective quality.

Active Publication Date: 2014-05-28
ZHEJIANG UNIV

AI Technical Summary

Problems solved by technology

Therefore, it is inappropriate to set a uniform threshold T for all pixels in the synthesized virtual view. Although DCR can keep the objective distortion (such as MSE) of every pixel in the synthesized view below T, it may still produce many severe distortions that are perceptible to the human eye.
In addition, the residual-energy-minimization strategy adopted by the DCR method does not guarantee optimal rate-distortion performance.




Embodiment Construction

[0062] A 3D video depth map coding method based on just distinguishable parallax error estimation comprises the following steps:

[0063] (1) Input a frame of the 3D video depth map and its corresponding texture image;

[0064] (2) Synthesize the texture image of the virtual viewpoint from the 3D video depth map and its corresponding texture image;

[0065] Steps (1) and (2) are performed using existing techniques. Each frame of 3D video has a depth map and a texture image that correspond to each other.
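Step (2) is typically done with depth-image-based rendering (DIBR): each 8-bit depth value is mapped to a real-world distance and then to a horizontal disparity that warps the texture pixel into the virtual view. The patent does not spell out this conversion here, so the sketch below uses the standard MPEG 3DV depth-to-disparity mapping with illustrative parameter names (`z_near`, `z_far`, `focal_length`, `baseline` are assumptions, not values from the patent):

```python
import numpy as np

def depth_to_disparity(depth_map, z_near, z_far, focal_length, baseline):
    """Convert an 8-bit depth map to per-pixel horizontal disparity.

    Uses the common MPEG 3DV depth-value-to-distance mapping:
        1/Z = v/255 * (1/z_near - 1/z_far) + 1/z_far,  then  d = f * B / Z.
    Parameter names and values are illustrative only.
    """
    v = depth_map.astype(np.float64)
    inv_z = v / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far
    return focal_length * baseline * inv_z  # d = f * B / Z

# Example: a uniform depth plane at v = 255 maps to Z = z_near
d = depth_to_disparity(np.full((4, 4), 255, dtype=np.uint8),
                       z_near=40.0, z_far=120.0,
                       focal_length=1000.0, baseline=0.05)
```

With these sample numbers, every pixel receives disparity f·B/z_near = 1000·0.05/40 = 1.25; in a real DIBR pipeline this disparity would then shift each texture pixel horizontally into the virtual viewpoint.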

[0066] (3) Calculate the just distinguishable error map of the texture image of the virtual viewpoint, which specifically includes the following steps:

[0067] 3-1. According to the response characteristics of the human visual system, calculate the background luminance masking effect value T_l(x, y) at each pixel (x, y) in the texture image of the virtual viewpoint;

[0068] T_l( ...
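The formula itself is truncated on this page. For orientation, a widely used luminance-masking model of this kind is the piecewise function of Chou and Li (1995), sketched below with the local background luminance approximated by a 5×5 mean. This is an illustrative stand-in, not necessarily the formula claimed in the patent:

```python
import numpy as np

def luminance_masking_jnd(image):
    """Per-pixel luminance-masking threshold T_l(x, y).

    Applies the Chou-Li (1995) piecewise model to the local background
    luminance bg(x, y), here estimated with a 5x5 mean filter. This is
    an illustrative stand-in for the patent's (truncated) formula.
    """
    img = image.astype(np.float64)
    # 5x5 mean filter as a simple background-luminance estimate
    pad = np.pad(img, 2, mode='edge')
    bg = np.zeros_like(img)
    for dy in range(5):
        for dx in range(5):
            bg += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    bg /= 25.0
    # Chou-Li piecewise luminance-masking function:
    #   T = 17 * (1 - sqrt(bg/127)) + 3      for bg <= 127
    #   T = 3/128 * (bg - 127) + 3           for bg  > 127
    return np.where(bg <= 127,
                    17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0,
                    3.0 / 128.0 * (bg - 127.0) + 3.0)
```

The shape of the function matches the text's claim that a single uniform threshold is inappropriate: dark backgrounds (bg near 0) tolerate errors of up to about 20 gray levels, while mid-gray backgrounds tolerate only about 3.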



Abstract

The invention discloses a three-dimensional video depth map coding method based on just distinguishable parallax error estimation. The method comprises the following steps: (1) inputting a frame of a three-dimensional video depth map and its corresponding texture image; (2) synthesizing the texture image of a virtual viewpoint; (3) calculating the just distinguishable error map of the texture image of the virtual viewpoint; (4) calculating the range of the just distinguishable parallax error of the three-dimensional video depth map; (5) performing intra-frame and inter-frame prediction on the three-dimensional video depth map and selecting the prediction mode with the minimum prediction residual energy; (6) adjusting the prediction residual of the three-dimensional video depth map to obtain the prediction residual block with the minimum variance; and (7) encoding the three-dimensional video depth map of the current frame. The method can greatly reduce the bit rate of depth map coding while keeping the PSNR of the virtual synthesized video image unchanged, and at the same time substantially improve the subjective quality of the virtual synthesized viewpoint.
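Steps (4) and (6) are the core of the method: once per-pixel parallax-error tolerances are known, each depth residual value may be shifted within its tolerance without producing a visible synthesis error, and the method picks the adjustment with minimum residual variance. A minimal sketch of such a variance-reducing adjustment, assuming symmetric per-pixel bounds (the names `jndd_lo`/`jndd_hi` and the mean-clamping rule are illustrative; the patent's exact adjustment rule is not reproduced on this page):

```python
import numpy as np

def adjust_residual(residual, jndd_lo, jndd_hi):
    """Adjust a prediction-residual block within a per-pixel tolerance.

    Each pixel's residual may move anywhere in
    [residual - jndd_lo, residual + jndd_hi] without a noticeable
    disparity error. One simple variance-reducing choice, used here
    purely for illustration, is to clamp the block mean into each
    pixel's tolerance interval.
    """
    r = residual.astype(np.float64)
    target = r.mean()  # pulling every pixel toward the mean reduces variance
    lo, hi = r - jndd_lo, r + jndd_hi
    return np.clip(target, lo, hi)

r = np.array([[4.0, -2.0], [6.0, 0.0]])
adj = adjust_residual(r, jndd_lo=3.0, jndd_hi=3.0)
```

A lower-variance residual block is cheaper to transform-code, which is how the tolerance budget is converted into bit-rate savings without lowering the synthesized view's PSNR.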

Description

technical field

[0001] The invention relates to the field of three-dimensional video coding, and in particular to a three-dimensional video depth map coding method based on just distinguishable parallax error estimation.

Background technique

[0002] Over the past century and more, the continuous rise in human demands on visual perception has driven the development of image and video technology, from the original black-and-white silent film to the digital high-definition video (HDTV) technology widely used today. It can be said that today's video technology gives people a highly satisfying viewing experience.

[0003] Even so, people's demands on the visual experience have not been fully met. With the continuous development of computer and information technology, viewers have raised higher requirements for the visual experience, hoping to obtain more realistic visual effects while watching video; that is to say, viewers pursue an "immersive...

Claims


Application Information

IPC(8): H04N19/625, H04N19/103, H04N19/12
Inventors: Tian Xiang (田翔), Luo Lei (罗雷), Chen Yaowu (陈耀武)
Owner: ZHEJIANG UNIV