A Derivation Method of Local Disparity Vector

A disparity vector derivation technology, applied in the field of multi-view video coding, which can solve problems such as the derived disparity vector not being accurate enough and a non-zero disparity vector being underivable.

Active Publication Date: 2019-07-02
PEKING UNIV


Problems solved by the technology

[0017] The purpose of the present invention is to solve two problems: in the prior art of 3D-HEVC, there are too few sources of disparity vectors, which may make it impossible to derive a non-zero disparity vector from adjacent units; and in the prior art of 3D-AVS, disparity vectors derived from global information are not accurate enough. The invention therefore proposes a disparity vector derivation scheme for 3D multi-view video coding that uses local information.



Examples


Specific Embodiment 1

[0050] The method for deriving a disparity vector in multi-viewpoint video coding in this embodiment proceeds according to the following steps:

[0051] Step 1. Select the coded areas adjacent to the current prediction unit PU on the left, upper left, top, upper right, and lower left as five candidate areas, denoted a1, b2, b1, b0, and a0 respectively, as shown in Figure 7. The region size is taken as R×W on the left, R×H on the top, and R×W on the right, where W and H are the width and height of the current prediction block PU; R is the region coefficient, with initial value 1 and maximum value R_M, which is taken as 4 in this embodiment. The size of the adjacent areas therefore varies with the width and height of the current prediction block PU.
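A minimal sketch of Steps 1–2: building the five candidate regions around the current PU and tiling each into 4×4 units. The function names and the exact geometry are illustrative assumptions, not the patent's normative layout; in particular, the strip thickness of 4·R samples is one plausible reading of the translated region sizes.

```python
def split_into_4x4(x0, y0, w, h):
    """Yield top-left corners of the 4x4 units tiling a region."""
    for y in range(y0, y0 + h, 4):
        for x in range(x0, x0 + w, 4):
            yield (x, y)

def candidate_regions(px, py, W, H, R):
    """Five coded neighbour regions of a PU at (px, py) with size W x H.

    Assumed geometry: strips of thickness 4*R samples hugging the PU's
    left and top edges, plus corner blocks (upper-left, upper-right,
    lower-left), scaled by the region coefficient R.
    Each value is (x, y, width, height).
    """
    t = 4 * R  # strip thickness in samples (assumption)
    return {
        "a1": (px - t, py,     t, H),  # left
        "b1": (px,     py - t, W, t),  # above
        "b2": (px - t, py - t, t, t),  # above-left corner
        "b0": (px + W, py - t, t, t),  # above-right corner
        "a0": (px - t, py + H, t, t),  # below-left corner
    }
```

With R = 1 the left strip of a 16×8 PU at (64, 64) is a 4×8 rectangle, which tiles into two 4×4 units.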

[0052] Step 2. Divide each candidate area into several units of 4×4 size;

[0053] Step 3. Check the five candidate areas in the order left a1, upper b1, upper-left b2, right u...
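The scanning and averaging described in this embodiment can be sketched as follows. Since Step 3 is truncated in the source, the sketch assumes the region order a1, b1, b2, b0, a0 and that the first region containing any disparity vector supplies the result; `get_dv(unit)` is a hypothetical accessor returning a `(dx, dy)` disparity vector for a coded 4×4 unit, or `None` if that unit carries no DV.

```python
SCAN_ORDER = ["a1", "b1", "b2", "b0", "a0"]

def derive_local_dv(region_units, get_dv):
    """region_units: dict mapping region name -> list of 4x4 unit ids.

    Visit regions in SCAN_ORDER; within the first region that holds at
    least one disparity vector, average all DVs found in its 4x4 units.
    """
    for name in SCAN_ORDER:
        dvs = [dv for u in region_units.get(name, [])
               if (dv := get_dv(u)) is not None]
        if dvs:  # first region with at least one DV wins (assumption)
            n = len(dvs)
            return (sum(dx for dx, _ in dvs) / n,
                    sum(dy for _, dy in dvs) / n)
    return (0, 0)  # no DV anywhere: fall back to the zero vector
```

For example, if neither 4×4 unit in a1 carries a DV but one unit in b1 holds (8, 0), the derived local disparity vector is (8.0, 0.0).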

Specific Embodiment 2

[0062] The method for deriving a disparity vector in multi-viewpoint video coding in this embodiment proceeds according to the following steps:

[0063] Step 1. Select the coded areas adjacent to the current prediction unit PU on the left, upper left, top, upper right, and lower left as five candidate areas, denoted a1, b2, b1, b0, and a0 respectively, as shown in Figure 7. The region size is taken as R×W on the left, R×H on the top, and R×W on the right, where W and H are the width and height of the current prediction block PU; R is the region coefficient, with initial value 1 and maximum value R_M, which is taken as 4 in this embodiment. The size of the adjacent areas therefore varies with the width and height of the current prediction block PU.

[0064] Step 2. Divide each candidate area into several units of 4×4 size;

[0065] Step 3. Check the five candidate areas in the order left a1, upper b1, upper-right b0, left l...

Specific Embodiment 3

[0072] The method for deriving a disparity vector in multi-viewpoint video coding in this embodiment proceeds according to the following steps:

[0073] Step 1. Select the coded areas adjacent to the current prediction unit PU on the left, upper left, top, upper right, and lower left as five candidate areas, denoted a1, b2, b1, b0, and a0 respectively, as shown in Figure 7. The region size is taken as R×W on the left, R×H on the top, and R×W on the right, where W and H are the width and height of the current prediction block PU; R is the region coefficient, with initial value 1 and maximum value R_M, which is taken as 4 in this embodiment. The size of the adjacent areas therefore varies with the width and height of the current prediction block PU.

[0074] Step 2. Divide each candidate area into several units of 4×4 size;

[0075] Step 3. Check the five candidate areas in the order left a1, upper b1, upper-left b2, left lo...



Abstract

A local disparity vector derivation method for multi-view video coding, comprising: dividing the coded left, upper-left, upper, upper-right, and lower-left candidate areas adjacent to the current prediction unit PU into multiple blocks of size l×w; checking, in a fixed order, whether a disparity vector exists in the divided blocks of the current candidate area to decide whether to check the next candidate area; if the current candidate area has no available disparity vector, extending the check to adjacent areas to obtain new candidate areas, until all divided blocks in the current candidate area have a disparity vector or the maximum set extent of the candidate area is reached; and finally averaging all the disparity vectors in the l×w blocks of the current candidate area to obtain the local disparity vector of the current prediction unit. The method addresses the inaccuracy in current 3D-HEVC caused by falling back to the default zero vector when few sources of disparity vectors are available, as well as the inaccuracy of disparity vectors derived from global information in 3D-AVS.
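The expansion rule in the abstract can be sketched as below: grow the region coefficient R until every 4×4 unit of the current candidate area carries a disparity vector, or until the maximum coefficient R_M is reached, then average whatever DVs were collected. All names here are illustrative assumptions; `units_for_R(R)` stands in for enumerating the candidate area's 4×4 units at expansion level R.

```python
def dv_with_expansion(units_for_R, get_dv, R_max=4):
    """units_for_R(R) -> list of 4x4 unit ids in the area at coefficient R.

    get_dv(unit) -> (dx, dy) disparity vector, or None if unavailable.
    """
    dvs = []
    for R in range(1, R_max + 1):
        units = units_for_R(R)
        dvs = [dv for u in units if (dv := get_dv(u)) is not None]
        if units and len(dvs) == len(units):
            break  # every unit has a DV: stop expanding
    if not dvs:
        return (0, 0)  # nothing found even at R_max: zero vector fallback
    n = len(dvs)
    return (sum(x for x, _ in dvs) / n, sum(y for _, y in dvs) / n)
```

The design choice mirrors the abstract: expansion stops early as soon as the area is fully covered by disparity vectors, so small, fully-coded neighbourhoods are preferred over large, sparsely-coded ones.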

Description

Technical field

[0001] The invention relates to the technical field of multi-viewpoint video coding, and in particular to a method for deriving local disparity vectors in three-dimensional multi-viewpoint video coding.

Background technique

[0002] In recent years, 3D multi-view video has been favored because it provides richer visual information and a more immersive viewing experience. Since 3D multi-view video is shot with two or more cameras, the number of viewpoints, and hence the data volume, is greatly increased compared with traditional 2D video. Research on efficient 3D multi-view video compression coding is therefore very important. At present, the international video standardization organizations MPEG and ITU-T VCEG have jointly formulated the 3D video compression coding standard 3D-HEVC (an extension of High Efficiency Video Coding). At the same time, the China Digital Audio and Video Codec Technology Standard Working Group is also developing a 3D multi-vie...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): H04N19/597, H04N13/161
CPC: H04N13/161, H04N19/597
Inventors: 马思伟, 毛琪, 王苫社, 苏静, 罗法蕾
Owner PEKING UNIV