
Adaptive video compression artifact removal method based on deep learning

A video compression and deep learning technology applied in the field of video processing. It addresses problems such as increased coding complexity, lack of adaptive ability, and weak robustness, with effects including enhanced nonlinear expression ability, alleviation of the vanishing gradient problem, and strengthened feature propagation and reuse.

Active Publication Date: 2019-01-22
福建帝视信息科技有限公司

AI Technical Summary

Problems solved by technology

While alleviating video compression artifacts, these two built-in filters also increase the complexity of encoding and affect the real-time performance of the encoding algorithm.
[0005] In general, these traditional video compression artifact removal methods have the following problems. First, the filters must be designed by hand, and such filters usually target only one type of artifact, so their generality is poor.
Second, the filter thresholds must be set from experience; the threshold setting usually has a large impact on the filtering result, so robustness is weak.
Third, using built-in (in-loop) filters to alleviate video compression artifacts increases coding complexity and affects the real-time performance of the coding algorithm.
Fourth, traditional algorithms seldom use the useful information generated by the encoding process, so it is difficult to adjust the filter strength automatically and their adaptive ability is weak.
Moreover, current out-of-loop filtering methods based on deep learning still lack adaptive ability.
In other words, a single convolutional neural network model cannot handle video artifacts of various intensities well.
Internet video transcoding usually adopts a constant bit rate (CBR) mode, which leads to compression artifacts of different strengths within the same video sequence.
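Why CBR produces artifacts of varying strength can be illustrated with a toy quantization experiment (illustrative only, not from the patent): under a fixed bit budget, complex scenes force a coarser quantization step, and a coarser step yields larger reconstruction error, i.e. stronger artifacts.

```python
import numpy as np

# Toy experiment: quantize the same signal with a fine and a coarse step,
# standing in for low-complexity vs. high-complexity scenes under CBR.
rng = np.random.default_rng(0)
signal = rng.uniform(0, 255, size=1000)

def quantize(x, step):
    """Uniform scalar quantizer: round to the nearest multiple of `step`."""
    return np.round(x / step) * step

err_fine = np.abs(signal - quantize(signal, 4)).mean()    # small step: mild artifacts
err_coarse = np.abs(signal - quantize(signal, 32)).mean() # large step: strong artifacts
print(err_fine < err_coarse)  # True: coarser quantization means larger error
```

A single denoising model tuned for one error level therefore underperforms on frames quantized at the other, which motivates the adaptive selection described above.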

Method used



Examples


Embodiment Construction

[0035] As shown in Figures 1-5. To enable researchers in the technical field to better understand the technical solution of the present invention, the technical solutions in the embodiments of the present application are described below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art on the basis of the embodiments described in this application, without creative work, shall fall within the protection scope of this application.

[0036] As can be seen from Figure 1, the present invention involves two implementation stages: an image quality prediction stage and an artifact removal stage. The invention discloses a deep-learning-based adaptive video compression artifact removal method comprising the following steps:

[0037] Step ...
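The two-stage flow of [0036] can be sketched as follows. All function names, the routing threshold, and the toy "models" are placeholders, not the patent's actual networks: stage 1 scores each decoded frame's quality, and stage 2 applies the removal model matched to that score.

```python
from typing import Callable, Dict, List

def remove_artifacts(frames: List[bytes],
                     predict: Callable[[bytes], float],
                     models: Dict[str, Callable[[bytes], bytes]]) -> List[bytes]:
    """Two-stage pipeline: (1) predict per-frame quality, (2) route the frame
    to the artifact-removal model trained for that artifact strength."""
    restored = []
    for frame in frames:
        score = predict(frame)                        # stage 1: quality prediction
        level = "strong" if score < 0.5 else "light"  # pick model by artifact level
        restored.append(models[level](frame))         # stage 2: artifact removal
    return restored

# Toy usage with placeholder predictor and models (tagging instead of filtering):
models = {"strong": lambda f: f + b"S", "light": lambda f: f + b"L"}
out = remove_artifacts([b"frame0", b"frame1"],
                       predict=lambda f: 0.2,  # dummy low-quality score
                       models=models)
print(out)  # [b'frame0S', b'frame1S']
```

The key design point is that the predictor, not a hand-set threshold on the filter itself, decides how aggressively each frame is restored.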



Abstract

The invention discloses an adaptive video compression artifact removal method based on deep learning, which adopts a deep densely connected convolutional network to automatically extract the compression characteristics of a video frame, effectively avoiding the shortcomings of manually designed filters in traditional methods. The invention acts on the post-processing stage of the video and does not affect the processing flow or real-time performance of existing video encoding and decoding algorithms. A new image quality prediction model is proposed to realize automatic selection among compression artifacts of different intensities, giving the method strong adaptive ability. A densely connected convolutional network is used to remove video compression artifacts, which effectively alleviates the vanishing gradient problem, deepens the network structure, and enhances the network's nonlinear expression ability. At the same time, the network can make full use of intermediate-layer features, which not only enhances feature propagation and reuse but also greatly reduces the number of network parameters.
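The dense-connection idea the abstract relies on (in the style of DenseNet) can be sketched with a minimal numpy stand-in, where matrix multiplication plus ReLU takes the place of a convolution; layer count, growth rate, and shapes are illustrative assumptions, not the patent's architecture. Each layer receives the concatenation of all earlier feature maps, which shortens gradient paths and reuses features.

```python
import numpy as np

def dense_block(x, num_layers=3, growth=4, rng=None):
    """Dense connectivity sketch: every layer consumes ALL previous outputs."""
    rng = rng or np.random.default_rng(0)
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=-1)           # reuse all earlier features
        w = rng.standard_normal((inp.shape[-1], growth)) * 0.1
        features.append(np.maximum(inp @ w, 0.0))         # "conv" stand-in + ReLU
    return np.concatenate(features, axis=-1)

x = np.ones((2, 8))   # batch of 2 feature vectors with 8 channels
y = dense_block(x)
print(y.shape)        # (2, 20): 8 input channels + 3 layers x 4 new channels
```

Because each layer only adds `growth` new channels while reusing the rest, the parameter count stays small relative to a plain network of the same depth, matching the abstract's claim about feature reuse reducing parameters.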

Description

Technical field

[0001] The invention relates to the fields of video processing and deep learning, and in particular to a deep-learning-based adaptive method for removing video compression artifacts.

Background technique

[0002] Video compression artifact removal is a technique used to improve video quality. The compression artifacts in a video are produced by the video's encoding method. [0003] With the rapid growth of Internet video data, higher compression rates are usually used in video encoding to control the cost of video storage and transmission. Generally speaking, lossy compression algorithms, such as the common MPEG and H.26X series, are used in the video encoding process; among these, H.264 is the most widely used video coding method. While reducing video size, these encoding methods introduce compression artifacts such as blockiness, ringing, flickering, and mosquito noise. These artifacts can severely deg...

Claims


Application Information

IPC (IPC8): H04N19/117, H04N19/86
CPC: H04N19/117, H04N19/86
Inventors: 苏建楠林宇辉黄伟萍李根童同高钦泉
Owner: 福建帝视信息科技有限公司