Compressed video quality enhancement method based on attention mechanism and time dependence

A compressed-video quality-enhancement technology based on an attention mechanism and temporal dependence, applied in the field of digital video signal modification, electrical components, and image communication. It addresses the problems that decoder-side reference information is often unobtainable in practice and that large inter-frame motion increases the difficulty of network training, and achieves improved objective quality evaluation metrics and enhanced visual quality.

Active Publication Date: 2020-04-17
FUDAN UNIV
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

[0005] However, these methods each have drawbacks. The first method must use reference information attached at the decoder, and this information cannot be obtained in most practical application scenarios, which limits its applicability.
In the second method, the network must learn to distinguish good from bad quality based on the subtle objective-quality gap between adjacent frames, which inevitably produces many unnecessary errors. Moreover, there is always a certain time interval between frames, so larger motion appears across the multiple frames fed to the network, which further greatly increases the difficulty of network training.




Embodiment Construction

[0037] The embodiments of the present invention will be described in detail below, but the protection scope of the present invention is not limited to the examples.

[0038] The network structure in Figure 1 is used; it was trained with 63 video sequences with resolutions ranging from 176x144 to 1920x1080.

[0039] The specific process is as follows:

[0040] (1) During training, 5 consecutive frames are used as the input of the network, 13 sets of inputs are selected as a batch, and each frame is cut into 64x64 patches for ease of training. Since each frame to be enhanced requires its two preceding and two following frames, for the first two and last two frames of each video a copy of the boundary frame is used to replace the missing neighbour.
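The boundary padding and patch cropping in step (1) can be sketched as follows. This is a minimal illustration of the described data preparation, not the patent's implementation; the function names and shapes are illustrative.

```python
import numpy as np

def pad_boundary_frames(frames, radius=2):
    """Replicate the first/last frame so that every target frame has
    `radius` neighbours on each side, as described for the first two
    and last two frames of each video."""
    head = [frames[0]] * radius
    tail = [frames[-1]] * radius
    return head + list(frames) + tail

def crop_patch(frame, top, left, size=64):
    """Cut a size x size training patch from a frame of shape (H, W)."""
    return frame[top:top + size, left:left + size]

# Toy example: a 6-frame video of 128x128 frames.
video = [np.full((128, 128), i, dtype=np.float32) for i in range(6)]
padded = pad_boundary_frames(video, radius=2)   # 6 + 2 copies at each end
window = padded[0:5]                            # 5 consecutive input frames
patch = crop_patch(window[2], 32, 32)           # 64x64 patch of the target frame
```

In practice the same crop offsets would be applied to all five frames of a window so the patches stay spatially aligned.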

[0041] (2) For testing, 16 video sequences disjoint from the training set are used as the test set. To measure each video's objective quality, first calculate the PSNR value between each frame in the video and the uncompressed orig...
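The per-frame PSNR computation in step (2) follows the standard definition, PSNR = 10 log10(MAX^2 / MSE). A minimal sketch (function names are illustrative):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """PSNR between an uncompressed reference frame and a test frame."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def average_psnr(ref_frames, test_frames):
    """Video-level objective quality: mean of per-frame PSNR values."""
    values = [psnr(r, t) for r, t in zip(ref_frames, test_frames)]
    return sum(values) / len(values)

# Toy example: a 4x4 frame with a single pixel off by 16.
ref = np.zeros((4, 4), dtype=np.uint8)
noisy = ref.copy()
noisy[0, 0] = 16
score = psnr(ref, noisy)
```

The enhancement gain is then reported as the average PSNR of enhanced frames minus the average PSNR of the compressed frames, both measured against the uncompressed original.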



Abstract

The invention belongs to the technical field of digital video processing and relates in particular to a compressed video quality enhancement method based on an attention mechanism and temporal dependence. The method comprises the steps of: constructing an attention-based FAM module; constructing an LDE feature-fusion module guided by long-term inter-frame temporal dependence; constructing a fine RSDE feature-fusion module guided by short-term inter-frame temporal dependence; using the FAM module to obtain feature information weighted according to the contributions of several consecutive input frames; using the LDE module to extract long-term temporal-dependence information from the features of adjacent frames, yielding an intermediate result and feature information; and finally, combining the FAM module and the RSDE module to selectively extract a short-term temporal dependency from the previously enhanced frame and generate the final enhancement result. Experiments show that on a test set containing various real scenes the method enhances visual quality and greatly improves the objective quality evaluation metrics.
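The abstract describes the FAM module as assigning different attention to input frames according to their contributions, but does not give its internals. A minimal sketch of the general idea of attention-weighted frame fusion, with a placeholder scoring rule standing in for the learned attention network (all names are illustrative, not the patent's architecture):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / np.sum(e)

def attention_fuse(frame_feats):
    """Weight per-frame feature maps by an attention score and sum them.
    Here the score is the global mean activation, a placeholder for a
    learned scoring sub-network; frames judged to contribute more
    receive larger weights."""
    scores = np.array([f.mean() for f in frame_feats])
    weights = softmax(scores)
    fused = sum(w * f for w, f in zip(weights, frame_feats))
    return fused, weights

# Toy example: three 8x8 feature maps with increasing activation.
feats = [np.ones((8, 8)) * k for k in (0.1, 0.5, 1.0)]
fused, w = attention_fuse(feats)
```

In the patented method this weighted feature information then feeds the LDE and RSDE fusion modules rather than being summed directly into an output frame.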

Description

technical field

[0001] The invention belongs to the technical field of intelligent digital video processing, and in particular relates to a method for enhancing video quality.

Background technique

[0002] With the continuous development of computer technology and network communication technology, a large amount of video information floods into the Internet, which poses great challenges to current storage and transmission technology. Various video compression technologies have therefore emerged, such as the MPEG (Moving Picture Experts Group) and HEVC (High Efficiency Video Coding) [2] standards. However, this video information is compressed lossily to achieve a higher compression rate, so the decompressed video always loses some important high-frequency information and exhibits artifacts such as blockiness and ringing, which severely degrade its visual quality.

[0003] Compressed video quality enhancement methods (Quality enh...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): H04N19/154, H04N19/42
CPC: H04N19/42, H04N19/154
Inventor: 颜波, 容文迅
Owner: FUDAN UNIV