
Depth video fast intraframe coding method

An intra-frame coding technology for depth video, applied in the field of video coding. It addresses problems such as increased coding complexity, failure to consider depth video texture, and the lack of research on depth-layer distribution characteristics, achieving the effects of reducing computational complexity and saving rough mode selection time and PU mode traversal time.

Active Publication Date: 2018-10-26
TIANJIN UNIV

AI Technical Summary

Problems solved by technology

[0004] 3D-HEVC is an extension of HEVC that retains the quadtree-based coding structure. In addition to the 35 traditional intra prediction modes, it introduces coding tools such as depth modeling modes for depth video. While this improves compression efficiency, it also increases encoding complexity.
[0006] Prior-art methods often rely on a blind traversal process and do not consider the connection between depth video texture characteristics and the intra prediction mode or the prediction unit (Prediction Unit, PU) mode. In addition, because the existing methods follow the quadtree division structure, they must traverse depth layers 0 to 3 in sequence, and the distribution characteristics of the depth layers have not been studied.



Examples


Embodiment 1

[0045] In order to overcome the shortcomings of the existing technology, the embodiment of the present invention proposes a fast intra-frame coding method for depth video based on content characteristics, which reduces encoding time without significantly degrading video quality. The specific technical solution mainly consists of the following steps:

[0046] 101: Divide each frame of the video into coding tree units; each coding tree unit is further decomposed into several square coding units according to the quadtree structure, and each coding unit is further divided into one or more prediction units;

[0047] 102: Based on the texture characteristics of the prediction unit, construct a fast decision formula combining the Hadamard transform cost and the variance to pre-screen the intra-frame modes of the prediction unit, and directly add the DC mode and the planar mode to the full RD cost calculation list as candidate modes if the condition is satisfied...
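The excerpt does not give the patent's exact decision formula or thresholds, so the following is only a minimal sketch of how a Hadamard-transform (SATD) cost and the sample variance of a prediction unit could be combined in such a pre-screening test; the function names and the thresholds thr_var and thr_satd_per_px are assumptions made for illustration.

```python
import numpy as np

def satd_4x4(residual):
    """Sum of absolute transformed differences of one 4x4 residual block,
    computed with the 4x4 Hadamard transform."""
    h = np.array([[1,  1,  1,  1],
                  [1, -1,  1, -1],
                  [1,  1, -1, -1],
                  [1, -1, -1,  1]])
    return int(np.abs(h @ residual @ h.T).sum())

def hadamard_cost(orig, pred):
    """Accumulate the 4x4 SATD over a square PU (orig, pred: 2-D arrays)."""
    diff = orig.astype(np.int64) - pred.astype(np.int64)
    n = orig.shape[0]
    return sum(satd_4x4(diff[y:y + 4, x:x + 4])
               for y in range(0, n, 4) for x in range(0, n, 4))

def prescreen_dc_planar(orig, pred_dc, pred_planar,
                        thr_var=10.0, thr_satd_per_px=2.0):
    """Illustrative fast decision: when the PU is smooth (low variance) and
    the cheaper of the DC / Planar predictions already has a small Hadamard
    cost, only DC and Planar are passed to the full RD-cost candidate list."""
    variance = float(np.var(orig))
    best_satd = min(hadamard_cost(orig, pred_dc),
                    hadamard_cost(orig, pred_planar))
    return variance < thr_var and best_satd < thr_satd_per_px * orig.size
```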

Embodiment 2

[0060] The scheme in Embodiment 1 is further described below in conjunction with specific examples and calculation formulas; see the following description for details:

[0061] Taking the video sequence Kendo as an example, the specific implementation process of the algorithm is illustrated by encoding it. The input video sequence is ordered as follows: color viewpoint 3, depth viewpoint 3, color viewpoint 1, depth viewpoint 1, color viewpoint 5, depth viewpoint 5. The color viewpoints are encoded with the original 3D-HEVC encoding method, while the depth viewpoints are encoded with the method proposed by the embodiment of the present invention.
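As a small illustration of this coding order (not code from the patent), the sketch below simply routes the Kendo color and depth views in the stated order and labels which encoder each component would use.

```python
# Coding order for the Kendo example: views 3, 1, 5, each as a (color, depth)
# pair. Color views use the original 3D-HEVC coding, while the depth views
# are the ones handled by the proposed fast intra method.
coding_order = [("color", 3), ("depth", 3),
                ("color", 1), ("depth", 1),
                ("color", 5), ("depth", 5)]

for component, view in coding_order:
    encoder = "original 3D-HEVC" if component == "color" else "proposed fast intra method"
    print(f"viewpoint {view}, {component}: {encoder}")
```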

[0062] 1. Coding tree unit division

[0063] HEVC adopts a block-based encoding method in which the block size can be changed adaptively by division. When the encoder processes a frame, the image is first divided into coding tree units (Coding Tree Unit, CTU) of 64×64 pixels. Each coding tree unit c...
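To make the quadtree division concrete, here is a minimal sketch (not the patent's implementation) of how a 64×64 CTU is recursively split into square CUs at depth layers 0 to 3; the split criterion should_split is a placeholder for whatever decision the encoder applies.

```python
CTU_SIZE = 64      # CTU size in pixels
MIN_CU_SIZE = 8    # smallest CU, reached at quadtree depth 3 for a 64x64 CTU

def split_ctu(x, y, size, depth, should_split):
    """Recursively split a CTU into square CUs; returns (x, y, size, depth) leaves.
    The CU size at depth d is CTU_SIZE >> d, i.e. 64, 32, 16, 8 for depths 0-3."""
    if size > MIN_CU_SIZE and should_split(x, y, size, depth):
        half = size // 2
        leaves = []
        for dy in (0, half):
            for dx in (0, half):
                leaves += split_ctu(x + dx, y + dy, half, depth + 1, should_split)
        return leaves
    return [(x, y, size, depth)]

# Example: split everything down to depth 1, giving four 32x32 CUs.
print(split_ctu(0, 0, CTU_SIZE, 0, lambda x, y, s, d: d < 1))
```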

Embodiment 3

[0107] The feasibility of the schemes in Embodiments 1 and 2 is verified below in conjunction with concrete experimental data; see the following description for details:



Abstract

The invention discloses a fast intra-frame coding method for depth video. The method includes the following steps: decomposing each coding tree unit into a plurality of square coding units according to a quadtree structure; dividing each coding unit into one or more prediction units; pre-screening the intra modes of each prediction unit based on its texture properties, using a fast decision formula constructed from the Hadamard transform cost and the variance, and directly adding the DC (direct current) mode and the planar mode to the full RD cost calculation list as candidate modes if the condition is satisfied; performing rate-distortion optimization on the full RD cost calculation list, and selecting the best PU mode for the current PU in advance according to the PU modes of neighboring coded PUs and the CBF flag bit; determining whether the current coding unit is further divided according to the coding depths of neighboring CTUs combined with the CBF flag bit; and measuring distortion as a weighted average of the distortion of rendered (synthesized) views and the distortion of the depth maps by employing a view synthesis optimization algorithm, thereby performing rate-distortion optimization for depth video coding.
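The view synthesis optimization step in the abstract can be summarized as a Lagrangian cost whose distortion term is a weighted average of the synthesized-view distortion and the depth-map distortion. The sketch below only illustrates this combination; the weights w_synth / w_depth and the Lagrange multiplier are encoder parameters, not values given in the text.

```python
def vso_rd_cost(d_synth, d_depth, rate_bits, lam, w_synth=0.5, w_depth=0.5):
    """Rate-distortion cost with view-synthesis-optimized distortion:
    D = w_synth * D_synth + w_depth * D_depth, and J = D + lambda * R."""
    distortion = w_synth * d_synth + w_depth * d_depth
    return distortion + lam * rate_bits
```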

Description

Technical Field

[0001] The invention relates to the field of video coding, in particular to a fast intra-frame coding method for depth video.

Background Technique

[0002] In recent years, with the rapid development of multimedia technology and the continuous growth of user demands, 3D video technology has attracted great attention. The amount of information contained in 3D video far exceeds that of 2D video, which places higher requirements on video coding technology. According to the video representation format, 3D video coding methods can be divided into two categories: one is based on the multiview video (Multiview Video, MVV) format, and the other is based on the multiview video plus depth (Multiview Video plus Depth, MVD) format. The MVD format reduces the number of color videos and introduces depth videos corresponding to the color videos. Depth Image Based Rendering (DIBR) can then be used to flexibly render virtual viewpoints, greatly improving transmissio...


Application Information

Patent Type & Authority: Application (China)
IPC (IPC-8): H04N19/147; H04N19/176; H04N19/96; H04N19/597; H04N19/593
CPC: H04N19/147; H04N19/176; H04N19/593; H04N19/597; H04N19/96
Inventor: 雷建军, 张凯明, 孙振燕, 彭勃, 丛润民, 张曼华, 徐遥令
Owner: TIANJIN UNIV