Video super-resolution recovery method based on deep learning and adjacent frames

A super-resolution and deep learning technology, applied in the field of computer image processing, which addresses problems in prior solutions such as overly simple learned features, failure to learn frame-specific information, and no consideration of adjacent frame pictures.

Active Publication Date: 2021-02-12
SHANDONG UNIV

AI Technical Summary

Problems solved by technology

However, that patented technical solution uses machine learning, and the features it learns are too simple and not rich enough. The TV2++ regularization term it constructs only mixes the information of the reference frame and the adjacent frames; it does not extract feature maps for them separately, and therefore does not learn their individual information. It also fails to account for the local motion of the adjacent frame pictures, so simply fusing the adjacent frames together cannot align the objects in those frames with the reference frame. This misalignment makes the restoration effect unsatisfactory.



Examples


Embodiment 1

[0053] A video super-resolution restoration method based on deep learning and adjacent frames, comprising the following steps:

[0054] (1) Data preprocessing:

[0055] Preprocess the data set and divide it into a training set and a test set;

[0056] The selected data set is REDS, a data set for video super-resolution. It is divided into a training set of 266 videos and a test set of 4 videos; each video has 100 frames. The low-resolution and high-resolution images have resolutions of 320*180*3 and 1280*720*3 respectively, where 320*180 and 1280*720 give the image size and 3 indicates that the image has three channels.
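A minimal sketch of the split described above, assuming the REDS-style layout of 270 clips of 100 frames each, with 266 clips for training and 4 for testing (the shapes and the `split_clips` helper are illustrative, not from the patent):

```python
import numpy as np

# Shapes stated in the patent: (height, width, channels).
LR_SHAPE = (180, 320, 3)    # low-resolution frames, 320*180*3
HR_SHAPE = (720, 1280, 3)   # high-resolution frames, 1280*720*3

def split_clips(clip_ids, n_test=4):
    """Split clip identifiers into a training set and a test set."""
    clip_ids = list(clip_ids)
    return clip_ids[:-n_test], clip_ids[-n_test:]

train_ids, test_ids = split_clips(range(270))
print(len(train_ids), len(test_ids))  # 266 4

# The implied upscaling factor is 4x in each spatial dimension.
scale = HR_SHAPE[0] // LR_SHAPE[0]
print(scale)  # 4
```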

[0057] (2) Data enhancement:

[0058] Crop the images into small pictures of size 64*64 to facilitate training, and randomly flip and rotate the images (0°, 90°, 180°, 270°) to augment the data.
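The augmentation step above can be sketched as follows (a hedged illustration with numpy; the `augment` helper and its parameters are assumptions, not part of the patent):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(frame, patch=64):
    """Crop a random patch, then apply a random flip and 90-degree rotation."""
    h, w, _ = frame.shape
    y = rng.integers(0, h - patch + 1)       # random top-left corner
    x = rng.integers(0, w - patch + 1)
    out = frame[y:y + patch, x:x + patch]
    if rng.random() < 0.5:                    # random horizontal flip
        out = out[:, ::-1]
    k = int(rng.integers(0, 4))               # rotate by 0/90/180/270 degrees
    return np.rot90(out, k).copy()

patch = augment(np.zeros((180, 320, 3), dtype=np.uint8))
print(patch.shape)  # (64, 64, 3)
```

In a paired low/high-resolution training setup, the same crop position, flip, and rotation would be applied to the high-resolution frame at 4x the patch size so the pair stays aligned.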

[0059] (3) Data conversion:

[0060]...

Embodiment 2

[0066] A video super-resolution recovery method based on deep learning and adjacent frames as described in Embodiment 1, differing in that:

[0067] The frame alignment module is fully convolutional, using both ordinary convolution and deformable convolution in a pyramid cascade structure. The pyramid cascade structure has three layers: L1, L2, and L3. The L1 feature map is obtained by ordinary convolution of the low-resolution patches produced in step (2); the L2 feature map is obtained by down-sampling and convolving the L1 feature map; and the L3 feature map is obtained by down-sampling and convolving the L2 feature map. The specific structure is shown in figure 2. The frame alignment module outputs the frame-aligned feature map.
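A toy sketch of the three-level pyramid just described, under the assumption that 2x2 average pooling stands in for the strided convolution that produces each coarser level (a real implementation would use learned strided and deformable convolutions, e.g. `torchvision.ops.DeformConv2d`):

```python
import numpy as np

def avg_pool2(x):
    """Downsample an (H, W, C) feature map by 2 with average pooling."""
    h, w, c = x.shape
    x = x[:h // 2 * 2, :w // 2 * 2]                       # drop odd rows/cols
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def build_pyramid(feat_l1):
    """Build the L1/L2/L3 cascade from the finest feature map."""
    feat_l2 = avg_pool2(feat_l1)   # L2: half the resolution of L1
    feat_l3 = avg_pool2(feat_l2)   # L3: half the resolution of L2
    return feat_l1, feat_l2, feat_l3

l1, l2, l3 = build_pyramid(np.zeros((64, 64, 16)))
print(l1.shape, l2.shape, l3.shape)  # (64, 64, 16) (32, 32, 16) (16, 16, 16)
```

Alignment offsets are typically estimated at the coarse L3 level first and refined upward through L2 and L1, which is what the cascade buys over a single-scale alignment.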

[0068] The reference frame, that is, the t-th frame image, and each of its adjacent frames, that is, the t+i-th frame im...
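Gathering the reference frame t together with its adjacent frames t+i (the 2n+1-frame input window mentioned in the abstract) can be sketched like this, with boundary frames repeated at the ends of the clip (the `neighbor_window` helper and the clamping policy are assumptions for illustration):

```python
import numpy as np

def neighbor_window(frames, t, n):
    """Return a (2n+1, H, W, C) stack of frames centred on frame t,
    clamping indices at the clip boundaries."""
    idx = np.clip(np.arange(t - n, t + n + 1), 0, len(frames) - 1)
    return np.stack([frames[i] for i in idx])

clip = [np.zeros((180, 320, 3)) for _ in range(100)]
window = neighbor_window(clip, t=0, n=2)
print(window.shape)  # (5, 180, 320, 3)
```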



Abstract

The invention relates to a video super-resolution recovery method based on deep learning and adjacent frames. The method comprises the following steps: (1) data preprocessing; (2) data enhancement; (3) data conversion; (4) network architecture construction: the network comprises a frame alignment module, a frame fusion module and a reconstruction module; (5) inputting each current video image frame needing super-resolution recovery, together with the n frames before and after it (2n + 1 frames in total), into the network architecture constructed in step (4), and outputting the recovered super-resolution video. According to the invention, the restored image has a better effect.

Description

technical field

[0001] The invention relates to a video super-resolution restoration method based on deep learning and adjacent frames, and belongs to the technical field of computer image processing.

Background technique

[0002] Super-resolution is a very important research problem in the fields of computer vision and image processing, with a wide range of applications in practical scenarios such as medical image analysis, biometric recognition, video surveillance and security. In actual work and life, due to equipment limitations and other reasons, the acquired video may have low resolution, causing many problems. In view of this situation, such video needs to undergo super-resolution restoration to obtain higher-quality video. With the development of deep learning technology, super-resolution methods based on deep learning have achieved the best performance and effect on multiple test tasks.

[0003] In the field of video super-resolution restoration, existi...

Claims


Application Information

Patent Type & Authority Applications(China)
IPC(8): G06T3/40, G06N3/04
CPC: G06T3/4046, G06T3/4053, G06N3/045, G06N3/044
Inventor: 杜晓炜, 周洪超, 段恩悦, 周斌
Owner SHANDONG UNIV