Video space-time super-resolution method and device based on improved deformable convolution correction

A super-resolution and convolution technology, applied in the field of video spatio-temporal super-resolution, which can solve the problems that the position and detail information of the intermediate frame cannot be restored, that motion estimation for the intermediate frame is difficult, and that the real motion situation is hard to reference, thereby achieving a high degree of freedom, reducing the learning difficulty, and improving motion compensation.

Active Publication Date: 2021-06-25
ZHEJIANG UNIV

AI Technical Summary

Problems solved by technology

However, traditional convolutional networks have rarely been studied for completing video spatio-temporal super-resolution in a single stage.
[0003] In the space-time super-resolution problem, motion estimation for the intermediate frame is particularly difficult because the intermediate frame lacks a reference and the adjacent input frames are of low resolution.
Although some recent research introduces methods such as deformable convolution to improve inter-frame motion compensation, existing deep learning networks are often still unable to restore the position and detail information of the intermediate frame at the same time.
Traditional optical flow methods model the motion between the intermediate frame and the two adjacent input frames with a preset ratio, which lacks adaptability.
The recently emerging motion correction methods based on deformable convolution, however, compensate poorly for video with large motion, have difficulty referencing the real motion situation, and suffer from insufficient generalization performance.
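The core operation behind deformable-convolution-based correction is sampling a neighbor frame's feature map at learned, per-pixel offset positions. The following is a minimal illustrative sketch of that sampling step only (the function name `warp_features` and the single-channel layout are assumptions for illustration; the patent's actual network predicts offsets with learned convolutions and applies them inside a multi-channel deformable convolution):

```python
import numpy as np

def warp_features(feat, offsets):
    """Bilinearly sample feature map `feat` (H, W) at positions shifted by
    per-pixel `offsets` (H, W, 2) = (dy, dx) -- the sampling step that
    deformable convolution uses to align features between frames.
    Hypothetical sketch, not the patent's implementation."""
    H, W = feat.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # shifted (possibly fractional) sampling coordinates, clamped to the image
    sy = np.clip(ys + offsets[..., 0], 0, H - 1)
    sx = np.clip(xs + offsets[..., 1], 0, W - 1)
    y0, x0 = np.floor(sy).astype(int), np.floor(sx).astype(int)
    y1, x1 = np.clip(y0 + 1, 0, H - 1), np.clip(x0 + 1, 0, W - 1)
    wy, wx = sy - y0, sx - x0
    # bilinear interpolation of the four surrounding feature values
    return ((1 - wy) * (1 - wx) * feat[y0, x0]
            + (1 - wy) * wx * feat[y0, x1]
            + wy * (1 - wx) * feat[y1, x0]
            + wy * wx * feat[y1, x1])
```

Because the offsets are continuous, the network can compensate for sub-pixel motion; the difficulty the passage above describes is learning offsets that track large, real motions.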




Embodiment Construction

[0029] In order to make the object, technical solution and advantages of the present invention clearer, the present invention will be further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention, and do not limit the protection scope of the present invention.

[0030] Figure 1 is a flowchart of a video spatio-temporal super-resolution method using the video spatio-temporal super-resolution network provided by an embodiment of the present invention. As shown in Figure 1, the video space-time super-resolution method provided by the embodiment includes the following process:

[0031] Prepare the training dataset. The original training images are taken from Vimeo. Select an original high-resolution frame sequence, generate a low-resolution frame sequence with a selected ...
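The data-preparation step above can be sketched as follows. The box-filter spatial downsampling and the 2:1 frame-rate reduction are illustrative assumptions; the patent's actual downsampling kernel and frame-selection ratio are not specified in this excerpt:

```python
import numpy as np

def make_training_pair(hr_frames, scale=4, keep_every=2):
    """Build a (low-res input, high-res target) training pair from an HR
    frame sequence, as in the dataset-preparation step.
    Spatial downsampling is plain box averaging and the frame rate is
    reduced by keeping every `keep_every`-th frame (both are assumptions)."""
    lr = []
    for f in hr_frames[::keep_every]:  # temporal subsampling
        h = f.shape[0] // scale * scale
        w = f.shape[1] // scale * scale
        f = f[:h, :w]
        # box-filter downsample by `scale` in each spatial dimension
        lr.append(f.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3)))
    return lr, hr_frames
```

The network is then trained to recover the full high-resolution, high-frame-rate sequence from the reduced one.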



Abstract

The invention discloses a video space-time super-resolution method and device based on improved deformable convolution correction. The method comprises the following steps: constructing a video space-time super-resolution network comprising a feature extraction module, an inter-frame correction module and an image reconstruction module; optimizing the parameters of the video space-time super-resolution network for later use; during application, using the feature extraction module to extract feature maps from the input low-resolution adjacent video frames; using the inter-frame correction module to perform correction processing on the feature maps corresponding to the adjacent video frames and synthesize an intermediate-frame feature map; and using the image reconstruction module to perform inter-frame and intra-frame feature extraction on the intermediate-frame feature map and the feature maps corresponding to the adjacent video frames, reconstructing and outputting a high-resolution, high-frame-rate image sequence. By improving the deformable convolution scheme and introducing explicit optical flow estimation, an attention network and other techniques, the inter-frame correction network is made better suited to the video space-time super-resolution task, and the restoration effect is greatly improved.
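The three-module pipeline named in the abstract can be sketched end to end as below. Every function body here is a hypothetical stand-in (identity features, simple averaging as "correction", nearest-neighbor upsampling as "reconstruction"); the real network uses learned convolutions, improved deformable correction, optical flow and attention:

```python
import numpy as np

def extract_features(frame):
    # feature extraction module (placeholder: identity "feature map")
    return frame

def interframe_correction(feat_prev, feat_next):
    # inter-frame correction module: synthesize the intermediate-frame
    # feature map from the two neighboring feature maps (placeholder: mean)
    return 0.5 * (feat_prev + feat_next)

def reconstruct(feats, scale=2):
    # image reconstruction module: upsample each feature map
    # (placeholder: nearest-neighbor repetition by `scale`)
    return [np.repeat(np.repeat(f, scale, axis=0), scale, axis=1) for f in feats]

def spatiotemporal_sr(frame_prev, frame_next, scale=2):
    """Two low-res input frames in, three high-res frames out: the output
    sequence has both a higher resolution and a higher frame rate."""
    f0, f1 = extract_features(frame_prev), extract_features(frame_next)
    f_mid = interframe_correction(f0, f1)
    return reconstruct([f0, f_mid, f1], scale)
```

The point of the sketch is the data flow: an intermediate frame is synthesized in feature space before reconstruction, which is what lets one network do frame interpolation and super-resolution in a single stage.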

Description

technical field [0001] The present invention relates to the field of computer image processing, in particular to a method and device for video space-time super-resolution based on improved deformable convolution correction. Background technique [0002] Video spatiotemporal super-resolution combines video super-resolution and video frame interpolation, two basic problems in the field of video processing. In recent years, the rapid development of deep learning networks has provided efficient solutions for video super-resolution and video frame interpolation, such as the deep-learning-based video super-resolution reconstruction method disclosed in the patent application with publication number CN109102462A. Another example is the patent application with publication number CN104463793A, which discloses a video super-resolution reconstruction method and system based on sparse expression and vector continued fraction interpolation under polar coordin...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06T5/00; G06T3/40; G06N3/04; G06N3/08
CPC: G06T5/001; G06T3/4053; G06N3/08; G06T2207/10016; G06T2207/20081; G06T2207/20084; G06N3/044
Inventors: 蒋荣欣, 蔡卓骏, 田翔, 陈耀武
Owner: ZHEJIANG UNIV