Video loop filter based on deep convolutional network

A loop filtering technology based on a deep convolutional network, applied in the field of computer vision, which achieves high reconstruction quality, improves the accuracy of inter-frame prediction, and reduces the number of coding bits

Status: Inactive; Publication Date: 2019-10-18
TIANJIN UNIV

AI Technical Summary

Problems solved by technology

[0005] The present invention addresses the limitations of existing video loop filters and proposes a video loop filtering method based on a deep convolutional network. A network model learned with a deep convolutional network realizes a more accurate non-linear mapping from the distorted image to the original image and replaces the deblocking filter and adaptive pixel compensation in the traditional loop filter, thereby realizing video filtering.
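
This excerpt does not disclose a concrete network architecture, but the idea of learning a non-linear mapping from the distorted reconstruction back toward the original frame can be sketched as a plain residual CNN. The following is a minimal, hypothetical sketch, not the patent's disclosed design; the layer count, width, and residual formulation are assumptions:

```python
# Hypothetical sketch of a deep convolutional loop filter: the network learns the
# non-linear mapping from a distorted reconstructed frame to the original frame and
# would stand in for the deblocking filter and adaptive pixel compensation inside
# the coding loop. Depth, width, and residual learning are assumptions.
import torch
import torch.nn as nn

class LoopFilterCNN(nn.Module):
    def __init__(self, channels: int = 1, features: int = 64, depth: int = 8):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(features, channels, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, distorted: torch.Tensor) -> torch.Tensor:
        # Predict the residual (original minus distorted) and add it back, so the
        # network only has to model the compression artifacts.
        return distorted + self.body(distorted)
```

Residual learning is a common choice for artifact-removal networks because the distorted frame is already close to the original; the patent may use a different formulation.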



Embodiment Construction

[0017] The technical solution of the present invention will be described in detail below in conjunction with the accompanying drawings and embodiments.

[0018] Step 1. In view of the different distortion characteristics of intra-frame prediction frames and inter-frame prediction frames, this embodiment selects two different data sets to make training sets for the intra-frame prediction frame network and the inter-frame prediction frame network, respectively.

[0019] For the intra-frame prediction frames, this embodiment selects the UCID (Uncompressed Colour Image Database) image data set containing 1,338 natural images. Each image in the data set is compressed with the HEVC reference software under the All-Intra configuration with the deblocking filter and adaptive pixel compensation turned off; the compressed image is divided into 35x35 pixel blocks and, together with the corresponding blocks of the original image, made into the training set for the intra-frame prediction frame network.
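
As an illustration of how such 35x35 training pairs might be assembled, here is a short sketch; the single-channel (luma) assumption and the block-cutting details are hypothetical and not stated in the excerpt:

```python
# Illustrative sketch of Step 1 (intra-frame data set): cut an aligned pair of
# compressed and original images into 35x35 blocks to form training samples.
# Operating on single-channel (luma) arrays is an assumption.
import numpy as np

def make_training_pairs(compressed: np.ndarray, original: np.ndarray, block: int = 35):
    """Return a list of (compressed_block, original_block) pairs of size block x block."""
    assert compressed.shape == original.shape
    h, w = compressed.shape[:2]
    pairs = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            pairs.append((compressed[y:y + block, x:x + block],
                          original[y:y + block, x:x + block]))
    return pairs
```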



Abstract

The invention discloses a video loop filtering method based on a deep convolutional network. The method comprises the following steps: step 1, making a training data set for loop filtering network training; step 2, constructing a network model for video filtering; step 3, taking the training data set obtained in step 1 as the training set of the network, respectively training two models for intra-frame prediction frame filtering and inter-frame prediction frame filtering, the two models together forming the video filtering network model, which is trained by taking a minimized loss function as the optimization target; and step 4, integrating the video filtering network model trained in step 3 into video encoding software to complete the whole video encoding process and obtain reconstructed frames after passing through the video filtering network. Compared with a traditional filtering method, the method improves the image quality of video reconstruction frames and the accuracy of inter-frame prediction, the filtered image frames have higher reconstruction quality, and video coding efficiency is greatly improved.
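
The abstract only states that the network is trained by minimizing a loss function; a minimal training-loop sketch, assuming a mean-squared-error loss between the filtered block and the original block and an Adam optimizer (both assumptions not named in the excerpt), could look like this:

```python
# Minimal sketch of Step 3: train one filtering model (e.g. the intra-frame model)
# on the block pairs from Step 1 by minimizing a reconstruction loss. MSE and Adam
# are assumptions; the patent excerpt does not name the loss or optimizer.
import torch
import torch.nn as nn

def train(model: nn.Module, loader, epochs: int = 10, lr: float = 1e-4, device: str = "cpu"):
    model.to(device).train()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.MSELoss()
    for _ in range(epochs):
        for distorted, original in loader:  # batches of 35x35 block pairs
            distorted, original = distorted.to(device), original.to(device)
            optimizer.zero_grad()
            loss = criterion(model(distorted), original)  # minimized loss function
            loss.backward()
            optimizer.step()
    return model
```

Two such models (one for intra-frame prediction frames, one for inter-frame prediction frames) would then be integrated into the encoder's loop-filter stage as described in step 4.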

Description

Technical field

[0001] The invention belongs to the fields of computer vision and video coding, and in particular relates to a video loop filter based on a deep convolutional network.

Background technique

[0002] As the most advanced video coding standard, HEVC (High Efficiency Video Coding) has greatly improved compression efficiency compared with previous video coding standards. Although HEVC's loop filter significantly improves the quality of reconstructed frames in video coding, the loop filtering technology of HEVC still has many limitations. For example, compressed video still suffers from unpleasant image artifacts caused by quantization. With the development of the Internet and new media, more and more videos are played and distributed every day, so there is an urgent need to provide videos with higher quality and smaller size. [0003] All existing coding standards adopt a hybrid coding framework, including processes such as intra/inter prediction, transform, ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): H04N19/82; H04N19/503; H04N19/593; H04N19/70
CPC: H04N19/82; H04N19/503; H04N19/593; H04N19/70
Inventor 张淑芳; 范增辉
Owner TIANJIN UNIV