Method for improving video compression coding efficiency based on deep learning

A video compression coding technology in the field of multimedia video coding, achieving the effects of improved prediction quality, a high degree of innovation, and improved coding efficiency.

Active Publication Date: 2018-03-20
HANGZHOU DIANZI UNIV

Problems solved by technology

How to apply the cutting-edge machine learning method of deep learning to the field of video compression coding.




Embodiment Construction

[0050] The present invention will be described in detail below in combination with specific embodiments.

[0051] As shown in Figure 1, a method for improving video compression coding efficiency based on deep learning is performed according to the following steps:

[0052] Step 1. Taking the foreman and flowers video sequences as examples, obtain the peak signal-to-noise ratio PSNR1 between the picture produced by the original inter-frame prediction (the most basic motion estimation and motion compensation) and the real picture. The specific method for obtaining PSNR1 is as follows (see the sketch after these sub-steps):

[0053] a. Block-based motion estimation:

[0054] Motion estimation refers to a set of techniques for extracting motion information from video sequences; the core research problem is how to obtain sufficiently accurate motion vectors quickly and effectively. Specifically, for a block in the previous frame of the foreman video sequence (frame i, denoted as im_src) in ...
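A minimal Python/NumPy sketch of this first step is given below, assuming 16x16 blocks, a full-search window of +/-8 pixels, and grayscale frames; the block size, search range, and function names are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def motion_compensate(im_src, im_cur, block=16, search=8):
    """Predict the current frame from the previous frame (im_src) using
    block-based full-search motion estimation and motion compensation."""
    h, w = im_cur.shape
    pred = np.zeros_like(im_cur, dtype=np.float64)
    for y in range(0, h, block):
        for x in range(0, w, block):
            cur_blk = im_cur[y:y + block, x:x + block].astype(np.float64)
            bh, bw = cur_blk.shape
            best_sad, best = np.inf, (0, 0)
            # Test every candidate displacement inside the search window.
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    ry, rx = y + dy, x + dx
                    if ry < 0 or rx < 0 or ry + bh > h or rx + bw > w:
                        continue
                    ref_blk = im_src[ry:ry + bh, rx:rx + bw].astype(np.float64)
                    sad = np.abs(cur_blk - ref_blk).sum()
                    if sad < best_sad:
                        best_sad, best = sad, (dy, dx)
            dy, dx = best  # motion vector of this block
            pred[y:y + bh, x:x + bw] = im_src[y + dy:y + dy + bh, x + dx:x + dx + bw]
    return pred

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between the real picture and a prediction."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# PSNR1: quality of the basic inter-frame prediction against the real frame.
# im_src, im_cur = two consecutive grayscale frames of the foreman sequence
# psnr1 = psnr(im_cur, motion_compensate(im_src, im_cur))
```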



Abstract

The invention discloses a method for improving video compression coding efficiency based on deep learning. The method comprises the following specific steps: S1, obtaining the peak signal-to-noise ratio PSNR1 of original video inter-frame prediction composed of basic motion estimation and motion compensation; S2, importing an SRCNN model to train on inter-frame pictures, thereby obtaining a weight matrix and a bias matrix, modifying the parameters of the SRCNN model, and tuning the network to obtain optimal training parameters; S3, testing pictures with the trained model to obtain the SRCNN test result PSNR2, and comparing PSNR1 with PSNR2 to establish the feasibility of applying the SRCNN to inter-frame prediction coding; and S4, applying the SRCNN model to HM16.0, the official reference software of the latest coding standard HEVC. By applying deep learning to the field of inter-frame coding, the method can improve the coding efficiency of blocks with intense inter-frame motion.
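As a reference for step S2, the sketch below shows the standard three-layer SRCNN architecture (9-1-5 convolution kernels with 64 and 32 feature maps) in PyTorch; the patent does not publish its exact layer settings, training hyper-parameters, or framework, so these values are assumptions drawn from the original SRCNN design.

```python
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    """Three-layer SRCNN: patch extraction, non-linear mapping, reconstruction.
    Kernel sizes 9-1-5 and channel counts 64/32 follow the original SRCNN paper;
    the patent's actual settings may differ."""
    def __init__(self, channels=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # feature extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                   # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x):
        return self.body(x)

# Training pairs: inter-predicted pictures as input, real pictures as target,
# so the network learns to restore detail lost by motion estimation/compensation.
# model = SRCNN()
# loss = nn.MSELoss()(model(predicted_frames), real_frames)
```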

Description

Technical field

[0001] The invention belongs to the field of multimedia video coding, is aimed at the latest video coding standards, and specifically relates to a method for improving video compression coding efficiency based on deep learning.

Background technique

[0002] With the development of network communication technology, the demand for watching online videos on computers and mobile terminals keeps growing, and people's requirements for video quality are also rising, which drives video compression technology to keep developing. For video coding, the International Organization for Standardization and other bodies have formulated a series of video communication standards, including H.261, H.262, H.263, H.264, MPEG-1, MPEG-2, MPEG-3, MPEG-4, AVS, etc. Nowadays the latest video coding standard is the high-efficiency video coding standard HEVC, i.e., H.265. Under this condition, the video coding efficiency is increased b...

Claims


Application Information

IPC(8): H04N19/159, H04N19/172, H04N19/51, H04N19/52, H04N19/70, G06T7/223
CPC: G06T2207/10016, G06T2207/20081, G06T2207/20084, G06T7/223, H04N19/159, H04N19/172, H04N19/51, H04N19/52, H04N19/70
Inventor: 李志胜, 颜成钢, 张永兵, 张腾, 赵崇宇
Owner HANGZHOU DIANZI UNIV