Video super-resolution reconstruction method based on multi-frame fusion optical flow

A super-resolution reconstruction technology in the field of video super-resolution based on multi-frame fusion optical flow and spatio-temporal residual dense blocks. It addresses problems such as increased computing cost, limited performance, and the neglect of temporal correlation between video frames, and achieves good reconstruction results.

Active Publication Date: 2020-06-19
SHAANXI NORMAL UNIV

AI Technical Summary

Problems solved by technology

Liu et al. designed a temporally adaptive neural network that learns the optimal scale of temporal dependence adaptively, but so far it uses only a simple three-layer convolutional structure, which limits its performance.
[0006] Current video super-resolution methods still have shortcomings. In single-frame methods the images of a video are treated independently: each frame is super-resolved separately and the frames are then reassembled into a video. These methods ignore the temporal correlation between video frames and lose many details.
Although multi-frame super-resolution methods do consider the temporal correlation between video frames, their models add considerable computational cost, which limits the development of video super-resolution to some extent.

Method used



Examples


Embodiment 1

[0031] Taking 30 random scenes from the CDVL dataset as the high-resolution dataset, the video super-resolution reconstruction method based on multi-frame fusion optical flow of this embodiment consists of the following steps (see Figure 1):

[0032] (1) Dataset preprocessing

[0033] For each of the 30 scenes in the high-resolution dataset, 20 frames are retained, and each frame is converted from RGB space to Y space according to the following formula to obtain single-channel high-resolution video frames.

[0034] Y=0.257R+0.504G+0.098B+16

[0035] where R, G, and B are the three color channels.
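The conversion above can be sketched in a few lines of numpy; the function name `rgb_to_y` is illustrative, not from the patent, and the coefficients are exactly those of the formula (the studio-range BT.601 luma transform).

```python
import numpy as np

def rgb_to_y(frame):
    """Convert an RGB frame (H, W, 3), uint8 in [0, 255], to a
    single-channel Y (luma) plane via Y = 0.257R + 0.504G + 0.098B + 16."""
    rgb = frame.astype(np.float64)
    return 0.257 * rgb[..., 0] + 0.504 * rgb[..., 1] + 0.098 * rgb[..., 2] + 16.0
```

Note that the result stays in [16, 235] rather than the full [0, 255] range, which is why the embodiment later normalizes the network input.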

[0036] A high-resolution video frame of 540 × 960 pixels is cropped from the same position in each high-resolution video frame as the learning target, and is reduced by a factor of 4 by downsampling to obtain a low-resolution video frame of 135 × 240 pixels, which serves as the network input and is normalized.
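The crop-and-downsample step can be sketched as follows. The patent does not specify the downsampling filter here, so 4 × 4 area averaging is used as a stand-in (bicubic is another common choice); the function name `make_lr` and the crop offsets are illustrative.

```python
import numpy as np

def make_lr(y_frame, top=0, left=0, scale=4, hr_h=540, hr_w=960):
    """Crop a 540x960 HR patch at a fixed position, shrink it by
    `scale` with area averaging, and normalize the LR patch to [0, 1]."""
    hr = y_frame[top:top + hr_h, left:left + hr_w]
    # Group pixels into scale x scale blocks and average each block.
    lr = hr.reshape(hr_h // scale, scale, hr_w // scale, scale).mean(axis=(1, 3))
    return hr, lr / 255.0
```

Using the same `top`/`left` for every frame of a scene keeps the HR targets and LR inputs spatially aligned across the sequence.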



Abstract

The invention discloses a video super-resolution reconstruction method based on multi-frame fusion optical flow. The method comprises the steps of collecting a dataset, constructing a motion compensation network, and constructing a super-resolution reconstruction network. In the multi-frame fusion optical flow network, the intra-frame spatial correlation of the multiple input frames is fully exploited to compensate for lost details; the fused optical flow is used for motion compensation, so that the compensated frames are close to the learning target. In the super-resolution reconstruction network, a three-dimensional multi-scale feature extraction layer and spatio-temporal residual modules extract image features from the compensated frames, and sub-pixel convolution produces the high-resolution video frame. The multi-frame fusion optical flow network and the video super-resolution reconstruction network are trained end to end at the same time. The spatio-temporal information between the acquired video frames can express the fused features of the frame sequence, so that high-quality high-resolution video frames are reconstructed. The method can be applied in technical fields such as satellite imagery, video surveillance, medical imaging, and military science and technology.
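The final sub-pixel convolution step mentioned in the abstract rearranges a low-resolution feature map with r² times the channels into an r-times-larger spatial grid (often called pixel shuffle). A minimal numpy sketch, assuming the common (C·r², H, W) channel-first layout; the function name is illustrative and the patent's actual network is not reproduced here:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) feature map into (C, H*r, W*r):
    each group of r*r channels fills one r x r block of output pixels."""
    c_rr, h, w = x.shape
    c = c_rr // (r * r)
    x = x.reshape(c, r, r, h, w)          # split channels into (C, r, r)
    x = x.transpose(0, 3, 1, 4, 2)        # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)
```

This is how a network operating at 135 × 240 resolution can emit a 540 × 960 frame in a single reshaping step instead of a learned upsampling layer.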

Description

technical field

[0001] The invention relates to the technical field of video super-resolution, in particular to a video super-resolution reconstruction method based on multi-frame fusion optical flow and spatio-temporal residual dense blocks.

Background technique

[0002] Video super-resolution methods, which generate high-resolution videos from low-resolution videos, have been extensively studied for decades as a typical computer vision problem. In recent years, the emergence of a large number of high-definition display devices and of ultra-high-definition resolutions has further promoted the development of video super-resolution. It also has wide application prospects in satellite imagery, video surveillance, medical imaging, and military technology, and has become one of the hot research topics in computer vision.

[0003] Traditional super-resolution methods are based on interpolation, such as nearest neighbor...

Claims


Application Information

Patent Type & Authority: Applications (China)
IPC(8): G06T3/40, G06T5/50, G06N3/04
CPC: G06T5/50, G06T3/4053, G06N3/045
Inventor: 郭敏, 方榕桢, 吕琼帅
Owner SHAANXI NORMAL UNIV