Spatial scalable fast coding method

A fast spatially scalable coding technology, applied in the field of spatially scalable fast coding, which achieves the effects of shortening coding time, reducing computational complexity, and improving the real-time performance of coding

Active Publication Date: 2018-06-29
CHONGQING UNIV OF POSTS & TELECOMM

AI Technical Summary

Problems solved by technology

Methods for reducing the complexity of SHVC coding have emerged, and they reduce the computational complexity of coding to a certain extent and improve real-time performance, but there is still considerable room for improvement in both the computational complexity and the real-time performance of coding.



Examples


Embodiment 1

[0046] The enhancement-layer video sequence is down-sampled at a ratio of 2:1 to obtain the base-layer video sequence, and the base layer is then encoded;

[0047] Because the base-layer video sequence in spatially scalable video coding is obtained by down-sampling the enhancement-layer sequence at a ratio of 2:1, it can be seen from Figure 3 that a coding tree unit (Coding Tree Unit, CTU) in the enhancement layer has a size of 64×64, while the corresponding coding tree unit in the base layer has a size of 32×32.
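
As an illustration of the 2:1 size relationship described in paragraph [0047], the following sketch (not part of the patent text) maps an enhancement-layer CTU grid position to the top-left pixel of its co-located base-layer CTU; the function name, the coordinate convention, and the printed example are assumptions made for this example.

# Illustrative sketch only: maps an enhancement-layer CTU to its co-located
# base-layer CTU under the 2:1 down-sampling described in paragraph [0047].
# All names and the coordinate convention are assumptions for this example.

EL_CTU_SIZE = 64   # enhancement-layer coding tree unit is 64x64
BL_CTU_SIZE = 32   # base-layer coding tree unit is 32x32 (for reference)
SCALE = 2          # spatial scaling ratio between the two layers (2:1)

def colocated_bl_ctu(el_ctu_x: int, el_ctu_y: int) -> tuple[int, int]:
    """Return the top-left pixel position of the base-layer CTU that is
    co-located with the enhancement-layer CTU at grid (el_ctu_x, el_ctu_y)."""
    # Pixel coordinates shrink by the 2:1 ratio, so the 32x32 base-layer CTU
    # covers the same picture area as the 64x64 enhancement-layer CTU.
    return el_ctu_x * EL_CTU_SIZE // SCALE, el_ctu_y * EL_CTU_SIZE // SCALE

# Example: the enhancement-layer CTU at grid position (3, 2) starts at
# pixel (192, 128); its co-located 32x32 base-layer CTU starts at (96, 64).
print(colocated_bl_ctu(3, 2))  # -> (96, 64)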

[0048] Coding units (Coding Unit, CU) are obtained by splitting the coding tree unit, so the size of a coding unit can be at most that of the coding tree unit. Moreover, since the present invention addresses intra-frame prediction in spatially scalable video coding, there are only two partition modes for a coding unit, namely 2N×2N and N×N.
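
As a small illustration of paragraph [0048] (again only a sketch with assumed names), the two intra partition modes can be enumerated as the prediction-unit sizes they produce for a coding unit of a given size.

# Sketch of the two intra partition modes named in paragraph [0048].
# Function and mode-string names are assumptions for this example.

def intra_partitions(cu_size: int, mode: str) -> list[tuple[int, int]]:
    """Return the prediction-unit sizes produced by partitioning a
    cu_size x cu_size coding unit with the 2Nx2N or NxN intra mode."""
    if mode == "2Nx2N":
        return [(cu_size, cu_size)]       # one PU covering the whole CU
    if mode == "NxN":
        half = cu_size // 2
        return [(half, half)] * 4         # four quarter-size PUs
    raise ValueError("intra CUs support only 2Nx2N and NxN")

print(intra_partitions(16, "2Nx2N"))  # -> [(16, 16)]
print(intra_partitions(16, "NxN"))    # -> [(8, 8), (8, 8), (8, 8), (8, 8)]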

[0049] Further, the spatially scalable coding method adop...

Embodiment 2

[0073] Figure 6 is a flowchart of a preferred embodiment of the present invention; it shows the process of predicting the depth of the current coding unit of the enhancement layer from its adjacent coding units.

[0074] Another feasible method for predicting the depth of the current coding unit of the enhancement layer is as follows:

[0075] In the first step, the original data are generated according to the international standard technology, the depths of the base-layer coding units in the original data are computed, the proportion of coding units with depth 3 is calculated, and the video sequence is classified accordingly;

[0076] As an alternative, when the proportion of depth-3 coding units is greater than the upper threshold, the video sequence is classified as the first category (sequence with a complex background); when the proportion is smaller than the lower threshold, the video sequence is classified as the second category (sequence with a simple background); otherwi...
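
The classification in paragraphs [0075]-[0076] can be sketched as follows, reading "the ratio of depth to 3" as the proportion of base-layer coding units whose depth equals 3. The threshold values and all names below are placeholders chosen for illustration (the patent text here is truncated and gives no concrete values); the third, intermediate category follows from the abstract's statement that sequences fall into three categories.

# Illustrative sketch of the sequence classification in paragraphs [0075]-[0076].
# The threshold values below are placeholders; the patent does not state them here.

UPPER_THRESHOLD = 0.4   # assumed upper threshold on the share of depth-3 CUs
LOWER_THRESHOLD = 0.1   # assumed lower threshold on the share of depth-3 CUs

def classify_sequence(bl_cu_depths: list[int]) -> int:
    """Classify a video sequence from the depths of its base-layer coding units.

    Returns 1 for a complex-background sequence, 2 for a simple-background
    sequence, and 3 otherwise (the intermediate category)."""
    depth3_ratio = sum(1 for d in bl_cu_depths if d == 3) / len(bl_cu_depths)
    if depth3_ratio > UPPER_THRESHOLD:
        return 1   # first category: complex background
    if depth3_ratio < LOWER_THRESHOLD:
        return 2   # second category: simple background
    return 3       # third category: between the two thresholds

# Example: a base layer dominated by depth-3 CUs is classified as category 1.
print(classify_sequence([3, 3, 2, 3, 3, 1, 3, 3]))  # -> 1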



Abstract

The invention relates to the technical field of scalable video coding, and in particular to a spatially scalable fast coding method. The method comprises the following steps: dividing a video sequence into three categories according to the depth distribution in the base layer; when encoding the enhancement layer, computing the degree of correlation between the current coding unit of the enhancement layer and each adjacent coding unit of the enhancement layer; setting, for each adjacent coding unit, a weight representing that degree of correlation; computing the probability corresponding to each depth of the current coding unit from the depths and weights of the adjacent coding units, and sorting the depths from the largest probability to the smallest; and predicting the depth of the current coding unit of the enhancement layer according to the category of the video sequence, removing the depths with small probability. The method reduces unnecessary traversal in the enhancement layer and thereby reduces coding complexity.
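
To make the depth-prediction step in the abstract concrete, the following sketch (an assumption-laden illustration, not the patent's reference implementation) lets each adjacent coding unit vote for its own depth with a weight representing its correlation to the current coding unit, normalizes the votes into per-depth probabilities, sorts them, and discards the low-probability depths; the example weights, the keep_top parameter, and all names are hypothetical.

# Illustrative sketch of the depth prediction summarized in the abstract:
# each adjacent coding unit votes for its own depth with a weight that
# represents its correlation with the current coding unit, the votes are
# normalized into per-depth probabilities, and low-probability depths are
# discarded before the enhancement-layer CU is actually coded.
# The weights, the keep_top parameter, and all names are assumptions.

def predict_depths(neighbor_depths, neighbor_weights, keep_top=2):
    """Return candidate depths (0..3) sorted by descending probability,
    keeping only the keep_top most probable ones."""
    votes = {d: 0.0 for d in range(4)}           # HEVC CU depths 0..3
    for depth, weight in zip(neighbor_depths, neighbor_weights):
        votes[depth] += weight                   # weighted vote of a neighbor
    total = sum(votes.values())
    probs = {d: v / total for d, v in votes.items()}
    ranked = sorted(probs, key=probs.get, reverse=True)
    return ranked[:keep_top]                     # prune small-probability depths

# Example: left, above, and above-left neighbors with depths 2, 2, 1 and
# correlation weights 0.5, 0.3, 0.2 keep depths [2, 1] as candidates.
print(predict_depths([2, 2, 1], [0.5, 0.3, 0.2]))  # -> [2, 1]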

Description

Technical field
[0001] The invention relates to the field of spatially scalable video coding, and in particular to a spatially scalable fast coding method.
Background technique
[0002] Nowadays, high-definition, ultra-high-definition, and Blu-ray videos are increasingly widely used in daily life. The rapid development of streaming media, the heterogeneity of networks, and the differences in user needs all promote the development of video coding technology. However, traditional video coding technology can no longer meet these many demands, and scalable video coding has emerged as the times require. Scalable High-efficiency Video Coding (SHVC), based on H.265/HEVC, is currently the latest standard technology. SHVC is divided into three categories: spatial scalability, temporal scalability, and quality scalability. Compared with H.264, High Efficiency Video Coding (HEVC) adds many advanced coding technologies that give it a higher compression rate and better video image quality, but its coding complexity al...


Application Information

IPC(8): H04N19/34, H04N19/96
CPC: H04N19/34, H04N19/96
Inventor 赵志强崔盈刘妍君汪大勇冉鹏王俊李章勇王伟
Owner CHONGQING UNIV OF POSTS & TELECOMM