Learning-based high efficiency video coding method

A high-efficiency video coding technology, applied in the field of learning-based high-efficiency video coding, which addresses problems such as reduced compression efficiency, the difficulty of trading coding efficiency against computational complexity, and the difficulty of adapting to the coding requirements of different video systems

Active Publication Date: 2016-11-23
SHENZHEN INST OF ADVANCED TECH CHINESE ACAD OF SCI

AI Technical Summary

Problems solved by technology

In general, existing machine-learning-based coding unit mode prediction methods depend heavily on feature selection and on the classification accuracy of the learning machine; once a prediction is inaccurate, compression efficiency drops sharply.
At the same time, once a traditional method's parameters are fixed, it is difficult to trade coding efficiency against computational complexity through parameter adjustment, so such methods are hard to adapt to the coding requirements of different video systems.



Examples


Embodiment Construction

[0067] Figure 1 is a flow chart of the learning-based high-efficiency video coding method.

[0068] A high-efficiency video coding method based on learning, comprising the following steps:

[0069] Step 110: Use a high-efficiency video encoder to encode a video sequence, and extract a feature vector corresponding to each coding unit block.

[0070] The feature vector includes features of the current coding unit block, motion information, context information, the quantization parameter, and the like, together with the optimal coding unit size.

[0071] The features of the current coding unit block include the coded block flag x_CBF_Meg(i), the rate-distortion cost x_RD_Meg(i), the distortion x_D_Meg(i), and the number of coded bits x_Bit_Meg(i), where i is the depth of the current coding unit.

[0072] The motion information is calculated as x_MV_Meg(i) = |MVx| + |MVy|, where MVx and MVy denote the horizontal and vertical motion vector magnitudes, respectively...
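
As an illustration of the feature extraction in paragraphs [0070] to [0072], the sketch below assembles one per-coding-unit feature vector. This is not the patent's implementation; the field names on the coding-unit record (cbf, rd_cost, distortion, bits, mv) and the helper itself are hypothetical stand-ins for values an HEVC encoder exposes at depth i.

```python
# Minimal sketch (assumed field names, not the patent's code): build the
# feature vector of paragraphs [0070]-[0072] for one coding unit at depth i.

def extract_features(cu, depth, qp):
    """Return the per-CU feature vector described in [0070]-[0072]."""
    x_cbf = cu["cbf"]              # coded block flag, x_CBF_Meg(i)
    x_rd = cu["rd_cost"]           # rate-distortion cost, x_RD_Meg(i)
    x_d = cu["distortion"]         # distortion, x_D_Meg(i)
    x_bit = cu["bits"]             # number of coded bits, x_Bit_Meg(i)
    mvx, mvy = cu["mv"]            # motion vector components
    x_mv = abs(mvx) + abs(mvy)     # motion information, x_MV_Meg(i) = |MVx| + |MVy|
    # Context information and the quantization parameter are appended as
    # further features, per paragraph [0070]; depth i is kept alongside them.
    return [x_cbf, x_rd, x_d, x_bit, x_mv, qp, depth]

# Example call with made-up values for a 32x32 CU at depth 1, QP 32.
features = extract_features(
    {"cbf": 1, "rd_cost": 1234.5, "distortion": 900.0, "bits": 42, "mv": (3, -1)},
    depth=1, qp=32)
```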



Abstract

The invention discloses a learning-based high efficiency video coding method. The method comprises the following steps: coding a video sequence with a high efficiency video coder and extracting the feature vector corresponding to each coding unit block; inputting the extracted feature vectors and the optimal coding unit sizes into a three-value-output learning machine and building a learning model; adding an early-abort strategy structure into the coding unit size selection process of the high efficiency video coder, first executing the skip mode and merge mode on the current block and extracting the feature vector of the current coding unit; inputting the feature vector into the trained learning machine model, outputting a prediction value, and processing the current coding unit size according to the corresponding early-abort strategy structure until all coding unit layers in the coding tree unit are coded; and repeating the process until the coding tree units of all video frames are coded. With this method, an optimal coding process can be selected according to the rate-distortion cost and the computational complexity, the learning and classification performance of the classifier is improved, and the coding efficiency of video coding is increased.
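
The early-abort selection loop summarized in the abstract can be sketched roughly as below, under stated assumptions: the three-value learning machine is modeled as a classifier returning "terminate" (keep the current coding unit size), "split" (go directly to the next depth), or "undecided" (fall back to the usual rate-distortion search). All helper functions are hypothetical stand-ins, not the patent's encoder code.

```python
# Sketch of the early-abort CU-size selection described in the abstract.
# Every helper below is a trivial stand-in for encoder internals and for the
# trained three-value-output learning machine (hypothetical, for illustration).
import random

MAX_DEPTH = 3  # HEVC coding-unit depths 0..3 (64x64 down to 8x8)

def encode_skip_merge(cu, depth):
    """Stand-in: test skip/merge modes first and return the CU's feature vector."""
    return [cu["cbf"], cu["rd_cost"], cu["distortion"], cu["bits"], depth]

def encode_full_rdo(cu, depth):
    """Stand-in: exhaustive rate-distortion search at the current CU size."""

def split(cu):
    """Stand-in: quadtree split of a CU into four sub-CUs."""
    return [dict(cu) for _ in range(4)]

class ThreeValueModel:
    """Stand-in for the trained three-value-output learning machine."""
    def predict(self, features):
        return random.choice(["terminate", "split", "undecided"])

def code_cu(cu, depth, model):
    features = encode_skip_merge(cu, depth)     # skip/merge modes first
    decision = model.predict(features)          # three-valued prediction
    if decision == "terminate" or depth == MAX_DEPTH:
        return                                  # early abort: keep this CU size
    if decision == "split":
        for sub in split(cu):                   # skip remaining modes, go deeper
            code_cu(sub, depth + 1, model)
        return
    encode_full_rdo(cu, depth)                  # "undecided": normal RD search,
    for sub in split(cu):                       # then recurse as a standard encoder would
        code_cu(sub, depth + 1, model)

# Process one coding tree unit (depth 0) with made-up statistics.
ctu = {"cbf": 1, "rd_cost": 1234.5, "distortion": 900.0, "bits": 42}
code_cu(ctu, 0, ThreeValueModel())
```

The recursion mirrors the quadtree traversal of a coding tree unit: the decision is repeated at every depth until all coding unit layers are coded, and the whole loop repeats for every coding tree unit in every frame.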

Description

Technical field

[0001] The invention relates to an image signal processing method, and in particular to a learning-based high-efficiency video coding method.

Background technique

[0002] High Definition (HD) and Ultra High Definition (UHD) videos are becoming increasingly popular because they provide better perceptual quality and a more realistic visual experience. These high-definition and ultra-high-definition videos have a broad application market, including high-definition TV broadcasting, IMAX movies, immersive video communication, network video on demand, and high-definition video surveillance. However, because of the higher resolution and frame rate of HD and UHD video, the amount of video data has also increased tremendously. For example, an 8K×4K video at 120 frames per second produces about 11.5 GB of raw video data per second, and its effective storage and transmission require very efficient video compre...
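
As a rough sanity check of the quoted raw data rate (the pixel format is an assumption here, since the excerpt does not state it):

```python
# Assuming an 8192x4096 frame, 3 bytes per pixel, and 120 frames per second;
# these figures are assumptions used only to check the order of magnitude.
width, height, bytes_per_pixel, fps = 8192, 4096, 3, 120
raw_rate = width * height * bytes_per_pixel * fps   # bytes per second
print(raw_rate / 2**30)   # ~11.25 GiB/s, consistent with the ~11.5 GB/s quoted above
```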


Application Information

Patent Type & Authority: Application (China)
IPC (8): H04N19/103; H04N19/147
Inventors: 张云, 朱林卫
Owner: SHENZHEN INST OF ADVANCED TECH CHINESE ACAD OF SCI