Super-resolution model compression and acceleration method based on self-distillation contrastive learning

A super-resolution technology, applied in the field of image super-resolution, which addresses the problems that super-resolution models have large parameter counts and computational costs, that their memory and computation requirements keep increasing, and that such models are therefore difficult to deploy, so as to achieve the effect of reducing the number of parameters and the computational complexity while retaining strong realism.

Status: Inactive; Publication Date: 2021-10-22
EAST CHINA NORMAL UNIVERSITY

AI Technical Summary

Problems solved by technology

More and more researchers have begun to construct increasingly sophisticated neural network structures to make full use of the information in an image, which has also led to a significant increase in the memory and computation required by deep-learning-based super-resolution algorithms.
[0004] In the prior art, super-resolution models contain too many parameters and require too much computation, which makes them difficult to deploy.



Examples


Embodiment 1

[0041] Referring to Figure 1, the training stage of the present invention is divided into three main parts: data set preprocessing, teacher network (teacher super-resolution model) pre-training, and student network (student super-resolution model) training.

[0042] A1: The data sets used are the public data sets DIV2K, Set5, and Urban100. DIV2K contains 1000 images at 2K resolution, 800 of which are selected as the training set while the remaining 200 constitute the test set. The validation set is composed of DIV2K, Set5, and Urban100. Corresponding low-resolution images are generated from all images by bicubic interpolation.
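As a minimal sketch of this preprocessing step, the snippet below generates low-resolution counterparts of the high-resolution images by bicubic downscaling; the directory layout and the ×4 scale factor are illustrative assumptions, not values fixed by the patent.

```python
# Sketch: generate LR images from HR images via bicubic interpolation.
# Paths and the scale factor are assumptions for illustration.
from pathlib import Path
from PIL import Image

def make_lr_images(hr_dir: str, lr_dir: str, scale: int = 4) -> None:
    """Downscale every HR image in hr_dir by `scale` using bicubic resampling."""
    out = Path(lr_dir)
    out.mkdir(parents=True, exist_ok=True)
    for hr_path in sorted(Path(hr_dir).glob("*.png")):
        hr = Image.open(hr_path).convert("RGB")
        lr = hr.resize((hr.width // scale, hr.height // scale), Image.BICUBIC)
        lr.save(out / hr_path.name)

# Example call (hypothetical directory names):
# make_lr_images("DIV2K/HR_train", "DIV2K/LR_train_x4", scale=4)
```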

[0043] A2: During training, all training-set images are preprocessed to improve the generalization ability of the model, mainly by randomly cropping 192×192 sub-images and applying horizontal and vertical flips, among other augmentation techniques.
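The augmentation described above can be sketched as follows, assuming HWC NumPy arrays and a ×4 scale factor; the helper names and the scale factor are assumptions, while the 192 patch size and the horizontal/vertical flips follow the text.

```python
# Sketch: paired random cropping (192x192 HR patch with the aligned LR patch)
# and horizontal/vertical flips, on HWC NumPy arrays.
import random
import numpy as np

def random_crop_pair(lr: np.ndarray, hr: np.ndarray, hr_patch: int = 192, scale: int = 4):
    """Crop an aligned (LR, HR) patch pair at a random location."""
    lr_patch = hr_patch // scale
    y = random.randrange(lr.shape[0] - lr_patch + 1)
    x = random.randrange(lr.shape[1] - lr_patch + 1)
    lr_crop = lr[y:y + lr_patch, x:x + lr_patch]
    hr_crop = hr[y * scale:y * scale + hr_patch, x * scale:x * scale + hr_patch]
    return lr_crop, hr_crop

def random_flip_pair(lr: np.ndarray, hr: np.ndarray):
    """Apply the same horizontal and/or vertical flip to both images."""
    if random.random() < 0.5:          # horizontal flip
        lr, hr = lr[:, ::-1], hr[:, ::-1]
    if random.random() < 0.5:          # vertical flip
        lr, hr = lr[::-1, :], hr[::-1, :]
    return np.ascontiguousarray(lr), np.ascontiguousarray(hr)
```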

[0044] Referring to Figure 2, the training and loss function calculation process ...
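As a minimal sketch of the loss structure named in the abstract (a reconstruction loss plus a contrastive loss computed from the teacher's prediction, a negative sample, and the high-resolution image), the code below assumes an L1 reconstruction term, a feature-space distance ratio for the contrastive term, and a pretrained feature extractor; the weighting factor, the extractor choice, and the negative-sample construction are assumptions, not the patent's specification.

```python
# Sketch: reconstruction loss + contrastive loss for student training.
# The HR image is the reconstruction target, the teacher output serves as the
# positive sample, and `negative` (e.g. a bicubic upsample of the LR input)
# serves as the negative sample.
import torch
import torch.nn.functional as F

def distillation_loss(student_sr, teacher_sr, hr, negative, feat_extractor, lam=0.1):
    rec_loss = F.l1_loss(student_sr, hr)              # reconstruction term
    with torch.no_grad():
        f_pos = feat_extractor(teacher_sr)            # positive features (upper bound)
        f_neg = feat_extractor(negative)              # negative features (lower bound)
    f_stu = feat_extractor(student_sr)
    # Small distance to the positive, large distance to the negative.
    contrast = F.l1_loss(f_stu, f_pos) / (F.l1_loss(f_stu, f_neg) + 1e-7)
    return rec_loss + lam * contrast

# A hypothetical feature extractor, e.g. the first VGG-19 layers:
# feat_extractor = torchvision.models.vgg19(weights="IMAGENET1K_V1").features[:8].eval()
```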



Abstract

The invention discloses a super-resolution model compression and acceleration method based on self-distillation contrastive learning. A self-distillation framework is adopted and a loss function based on contrastive learning is introduced: a teacher model with a relatively large number of parameters is trained, the first 1/r channels of each layer are selected to form a student model, and the parameters of the student model are shared with the teacher model; the reconstruction loss and the contrastive loss are calculated using the prediction output of the teacher model, the negative sample, and the high-resolution image. Compared with the prior art, the method is plug-and-play, the realism of the generated image is still guaranteed while the model is compressed and accelerated, the number of parameters and the amount of computation are greatly reduced, all existing super-resolution models can be compressed and accelerated, and the contrastive loss constrains the upper and lower bounds of the solution space. The performance of the model is maintained while its parameter count is reduced, so that the restored image has higher realism.
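To illustrate the "first 1/r channels of each layer, shared with the teacher" idea from the abstract, the sketch below slices the leading channels of a teacher convolution's weight tensor at forward time, so the student owns no weights of its own; the class name and the choice of r are assumptions, and in practice the first and last layers (RGB input/output) would typically keep their full channel counts.

```python
# Sketch: a student convolution that reuses the first 1/r input and output
# channels of a teacher convolution. Slicing the Parameter shares storage,
# so gradients from the student flow into the teacher's weights.
import torch.nn as nn
import torch.nn.functional as F

class SlicedConv2d(nn.Module):
    def __init__(self, teacher_conv: nn.Conv2d, r: int):
        super().__init__()
        self.teacher = teacher_conv
        self.out_c = teacher_conv.out_channels // r
        self.in_c = teacher_conv.in_channels // r

    def forward(self, x):
        w = self.teacher.weight[: self.out_c, : self.in_c]   # shared slice, not a copy
        b = self.teacher.bias[: self.out_c] if self.teacher.bias is not None else None
        return F.conv2d(x, w, b, stride=self.teacher.stride,
                        padding=self.teacher.padding)

# Usage: wrap an interior teacher layer to obtain the compressed student layer.
# teacher_layer = nn.Conv2d(64, 64, kernel_size=3, padding=1)
# student_layer = SlicedConv2d(teacher_layer, r=4)   # 16-channel slice of the teacher
```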

Description

Technical field

[0001] The invention relates to the technical field of digital image super-resolution, and in particular to a super-resolution model compression and acceleration method based on self-distillation contrastive learning.

Background technique

[0002] With advances in digital imaging equipment, the number of images acquired by cameras, tablets, mobile phones, surveillance systems and other devices is growing exponentially. Thanks to the rapid development of deep learning, these images are used for tasks such as image classification, semantic segmentation, and pedestrian re-identification, which greatly facilitates people's daily life. The performance of these computer vision systems is usually affected by the quality of the acquired images. However, due to imperfections in imaging systems, atmospheric environments and processing methods, digital images are degraded or corrupted to some extent during the process of formatio...


Application Information

IPC(8): G06T3/40; G06K9/62; G06T5/00; G06N20/10
CPC: G06T3/4053; G06T3/4007; G06N20/10; G06F18/214; G06T5/00
Inventor: 谢源, 王烟波, 吴海燕, 林绍辉, 张志忠, 马利庄
Owner: EAST CHINA NORMAL UNIVERSITY