
Bit allocation and rate control method for depth video coding

A depth video bit allocation and rate control technology, applied in the field of 3D video coding, which addresses the problem that existing methods do not combine the regional characteristics of depth video with rate control, and achieves the effects of improving coding quality, improving the quality of rendered virtual views, and meeting the application requirements of 3D video systems.

Inactive Publication Date: 2016-08-24
TIANJIN UNIV

AI Technical Summary

Problems solved by technology

[0014] The present invention provides a bit allocation and rate control method for depth video coding. Aiming at the problem that existing methods do not combine the regional characteristics of depth video with rate control, the invention studies region-based bit allocation and rate control for depth video coding, which improves the accuracy of the depth video coding target bit rate and the quality of rendered virtual viewpoints. See the description below for details:

Method used



Examples


Embodiment 1

[0031] 101: Establish a model relationship between depth video distortion and bit rate, and a model relationship between quantization parameter and bit rate, separately for the texture area and the smooth area;

[0032] 102: Establish a virtual viewpoint distortion model through the model relationship between depth video distortion and bit rate, and calculate the optimal target bit rates of the depth video texture area and smooth area according to the virtual viewpoint distortion model;

[0033] 103: Substitute the optimal target bit rates into the model relationship between quantization parameter and bit rate to obtain the optimal quantization parameter for the smooth area and the optimal quantization parameter for the texture area.

[0034] Wherein, the texture area in step 101 is obtained by dividing the depth video into regions; it consists of the minimum coding units that contain depth boundaries;

[0035] The smooth area in step 101 is: the area other than the texture ...
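
The summary does not give the exact model forms, but the three steps can be sketched under assumptions that are common in rate-control work: a hyperbolic distortion-rate model, an exponential rate-QP model, and a linear weighting of the two regional distortions in the virtual view. All symbols below (C_x, K_x, a_x, b_x, w_x, R_D) are illustrative assumptions, not the patent's fitted equations.

```latex
% Step 101 (assumed forms): per-region distortion-rate and rate-QP models,
% with x in {T, S} for the texture area (TA) and the smooth area (SA)
D_x = C_x R_x^{-K_x}, \qquad R_x = a_x e^{-b_x \mathrm{QP}_x}

% Step 102: virtual-viewpoint distortion as a weighted sum of the regional
% depth distortions, minimized under the depth-video rate budget R_D
D_v = w_T D_T + w_S D_S, \qquad
\min_{R_T, R_S} D_v \quad \text{s.t.} \quad R_T + R_S = R_D

% Equal marginal distortion per bit (Lagrange condition) gives the optimal
% regional target rates
w_T C_T K_T R_T^{-(K_T+1)} = w_S C_S K_S R_S^{-(K_S+1)} = \lambda
\;\Rightarrow\;
R_x^{*} = \left( \frac{w_x C_x K_x}{\lambda} \right)^{\frac{1}{K_x + 1}}

% Step 103: substitute the optimal rates back into the rate-QP model
\mathrm{QP}_x^{*} = \frac{1}{b_x} \ln \frac{a_x}{R_x^{*}}
```

Under these assumptions the multiplier λ is simply the value at which the two optimal regional rates sum to the depth-video budget R_D.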

Embodiment 2

[0041] The scheme in Embodiment 1 is described in detail below in combination with specific calculation formulas and principles; see the description below for details:

[0042] 201: Perform region division on the depth video;

[0043] Specifically, the depth video is divided into regions: the minimum coding units (Coding Unit, CU) that contain depth boundaries are marked as the texture area (Texture Area, TA), and the remaining areas are marked as the smooth area (Smooth Area, SA). The depth boundaries are extracted with the Canny operator. The minimum CU partition of the current coding tree unit (Coding Tree Unit, CTU) is determined from the already-coded CTU at the same position in a picture of the same temporal layer. Typically, the CTU size is 64*64 and the minimum CU size is 8*8. The selection of the minimum coding unit is well known to those skilled in the art and is not described in detail in this embodiment.
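
As a concrete illustration of step 201, the sketch below marks each 8*8 minimum CU that contains a Canny-detected depth boundary as texture area (TA) and the remaining CUs as smooth area (SA). It assumes an 8-bit single-channel depth frame and OpenCV; the function name, thresholds, and the simple per-block loop are illustrative choices, and the inheritance of the CU partition from the co-located CTU described above is not modeled here.

```python
# Minimal sketch of step 201 (region division of a depth frame).
# Assumptions: 8-bit single-channel depth frame, OpenCV's Canny detector,
# minimum CU size 8x8 as in the embodiment text. Illustrative only.
import numpy as np
import cv2

MIN_CU = 8  # minimum coding unit size (the embodiment uses 8*8 CUs inside 64*64 CTUs)

def divide_regions(depth_frame: np.ndarray,
                   low_thr: int = 50, high_thr: int = 150) -> np.ndarray:
    """Return a boolean map over 8x8 CUs: True = texture area (TA), False = smooth area (SA)."""
    # Depth boundaries are extracted with the Canny operator.
    edges = cv2.Canny(depth_frame.astype(np.uint8), low_thr, high_thr)

    h, w = depth_frame.shape
    cu_rows, cu_cols = h // MIN_CU, w // MIN_CU
    texture_map = np.zeros((cu_rows, cu_cols), dtype=bool)

    # Mark every minimum CU that contains a depth boundary as TA;
    # the remaining CUs form the smooth area SA.
    for r in range(cu_rows):
        for c in range(cu_cols):
            block = edges[r * MIN_CU:(r + 1) * MIN_CU,
                          c * MIN_CU:(c + 1) * MIN_CU]
            texture_map[r, c] = bool(block.any())
    return texture_map
```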

[0044] 202: Esta...

Embodiment 3

[0086] The feasibility of the schemes in Embodiments 1 and 2 is verified below in combination with Figures 2 and 3 and experimental data; see the following description for details:


PUM

Property: Frame rate

Abstract

The invention discloses a bit allocation and rate control method for depth video coding. The method comprises the following steps: establishing a model relationship between depth video distortion and bit rate and a model relationship between quantization parameter and bit rate, separately for a texture region and a smooth region; establishing a virtual viewpoint distortion model via the model relationship between depth video distortion and bit rate; calculating the optimal target bit rates of the texture region and the smooth region of the depth video according to the virtual viewpoint distortion model; and substituting the optimal target bit rates into the model relationship between quantization parameter and bit rate to obtain the optimal quantization parameter of the smooth region and the optimal quantization parameter of the texture region. The method incorporates the regional characteristics of depth video into the rate control algorithm, thereby improving the accuracy of bit allocation in depth video coding and the coding quality of the depth video texture region, which in turn improves the quality of rendered virtual views and meets the application requirements of 3D video systems.
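
A minimal numeric sketch of this pipeline is given below, reusing the assumed hyperbolic distortion-rate and exponential rate-QP models from the sketch after Embodiment 1. The parameter values, the SciPy root solver, and the function and variable names are illustrative assumptions, not the patent's fitted models.

```python
# Hedged numeric sketch of steps 102-103: split the depth-video rate budget
# between the texture area (TA) and smooth area (SA), then map the optimal
# target rates to quantization parameters. All parameters below are made-up
# illustrative values, not values from the patent.
import numpy as np
from scipy.optimize import brentq

# Assumed per-region parameters: D = C*R^(-K), R = a*exp(-b*QP), virtual-view weight w
PARAMS = {
    "TA": dict(C=4.0, K=0.9, a=8.0, b=0.12, w=0.7),
    "SA": dict(C=2.0, K=0.7, a=6.0, b=0.10, w=0.3),
}

def optimal_rate(lam: float, p: dict) -> float:
    # R* = (w*C*K / lambda) ** (1/(K+1)), from equal marginal virtual-view
    # distortion per allocated bit in both regions.
    return (p["w"] * p["C"] * p["K"] / lam) ** (1.0 / (p["K"] + 1.0))

def allocate(total_rate: float) -> dict:
    # Choose the Lagrange multiplier so the regional target rates sum to the budget.
    lam = brentq(lambda l: sum(optimal_rate(l, p) for p in PARAMS.values()) - total_rate,
                 1e-6, 1e6)
    result = {}
    for name, p in PARAMS.items():
        rate = optimal_rate(lam, p)
        qp = np.log(p["a"] / rate) / p["b"]   # step 103: invert R = a*exp(-b*QP)
        result[name] = (rate, qp)
    return result

if __name__ == "__main__":
    for region, (rate, qp) in allocate(total_rate=3.0).items():
        print(f"{region}: target rate {rate:.3f}, QP {qp:.1f}")
```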

Description

Technical field

[0001] The invention relates to the field of 3D video coding, and in particular to a bit allocation and rate control method for depth video coding.

Background technique

[0002] With the development of 3D display technology and depth-based virtual viewpoint rendering technology, the important role of depth video in 3D video is gradually being recognized. In 3D video, depth video is the geometric representation of objects in a 3D scene. Unlike color images, depth maps consist of large smooth regions and sharp boundaries, so directly applying traditional color video coding algorithms to depth video usually does not yield optimal coding results. By exploiting these unique properties of depth maps, depth video coding can achieve higher compression efficiency than color video coding. In order to better preserve depth edges and obtain higher compression efficiency, many depth video compression techniques have been proposed, including: boundary recons...

Claims


Application Information

Patent Timeline
IPC (8): H04N19/597; H04N19/146; H04N19/147; H04N13/00
CPC: H04N19/597; H04N13/161; H04N19/146; H04N19/147
Inventor 雷建军 (Lei Jianjun), 贺小旭 (He Xiaoxu), 侯春萍 (Hou Chunping), 李贞贞 (Li Zhenzhen), 李东阳 (Li Dongyang)
Owner TIANJIN UNIV