
Video super-resolution method based on time attention and cyclic feedback network

A super-resolution and feedback-network technology, applied in the field of video processing, which addresses the problems that the differing contributions of visual information to the super-resolution reconstruction effect are not exploited and that the feedback mechanism is not fully utilized, so as to improve the video super-resolution effect and the detail reconstruction effect.

Pending Publication Date: 2021-11-05
GUANGDONG UNIV OF TECH

AI Technical Summary

Problems solved by technology

In existing methods, (1) in terms of temporal information, the fact that visual information provided by adjacent frames at different distances from the target frame contributes differently to the super-resolution reconstruction has not been fully exploited; and (2) the feedback mechanism common in the human visual system, together with the cyclic, feedback-guided way in which humans learn new knowledge, has not been fully utilized.



Examples


Embodiment 1

[0064] As shown in Figure 1, a video super-resolution method based on a temporal attention and cyclic feedback network includes the following steps:

[0065] S1: Build a super-resolution network model comprising a temporal attention module and a cyclic feedback module;
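
A minimal PyTorch sketch of what the S1 skeleton could look like is shown below; the module names (TemporalAttention, FeedbackBlock, VSRNet), channel counts, number of feedback iterations, and the 4x scale factor are illustrative assumptions rather than the patented architecture.

# Minimal PyTorch sketch of the S1 skeleton (assumed layout, not the patented design).
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    # Learns a weight map over the T frames along the time axis (assumed implementation).
    def __init__(self, channels=64):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)   # per-frame score map

    def forward(self, feats):                                # feats: (B, T, C, H, W)
        b, t, c, h, w = feats.shape
        scores = self.score(feats.reshape(b * t, c, h, w)).view(b, t, 1, h, w)
        attn = torch.softmax(scores, dim=1)                  # attention map over the time axis
        return feats * attn                                  # emphasize more useful neighbours

class FeedbackBlock(nn.Module):
    # One feedback step: the high-level state is fed back to refine the low-level features.
    def __init__(self, channels=64):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(channels * 2, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x, state):
        return self.fuse(torch.cat([x, state], dim=1))

class VSRNet(nn.Module):
    def __init__(self, channels=64, steps=3, scale=4):
        super().__init__()
        self.embed = nn.Conv2d(3, channels, 3, padding=1)
        self.temporal_attention = TemporalAttention(channels)
        self.feedback = FeedbackBlock(channels)
        self.steps = steps
        self.to_rgb = nn.Sequential(
            nn.Conv2d(channels, 3 * scale ** 2, 3, padding=1), nn.PixelShuffle(scale)
        )

    def forward(self, lr_seq):                               # lr_seq: (B, T, 3, h, w)
        b, t, _, h, w = lr_seq.shape
        feats = self.embed(lr_seq.reshape(b * t, 3, h, w)).view(b, t, -1, h, w)
        feats = self.temporal_attention(feats)
        x = feats.mean(dim=1)                                # fuse the attention-weighted frames
        state = torch.zeros_like(x)
        for _ in range(self.steps):                          # cyclic feedback iterations
            state = self.feedback(x, state)
        return self.to_rgb(state)                            # high-resolution output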

[0066] S2: Obtain a public video super-resolution training data set from the network and preprocess the data set to obtain a low-resolution (LR) video sequence for training;

[0067] In this embodiment, videos from the existing public high-resolution Vimeo-90k dataset are selected as the training video data, and the video data are preprocessed.
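
The excerpt does not spell out the preprocessing, so the following is a hedged sketch under the common assumption that LR training sequences are produced by bicubic downsampling of the HR Vimeo-90k frames; the make_lr_sequence helper and the 4x scale factor are hypothetical.

import torch
import torch.nn.functional as F

def make_lr_sequence(hr_frames, scale=4):
    # hr_frames: (T, 3, H, W) float tensor in [0, 1]; returns (T, 3, H//scale, W//scale).
    lr = F.interpolate(hr_frames, scale_factor=1.0 / scale,
                       mode="bicubic", align_corners=False)
    return lr.clamp(0.0, 1.0)

Vimeo-90k septuplets contain 7 frames, so they would be trimmed to the 5-frame clips used in this embodiment before downsampling.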

[0068] S3: Determine the target frame to be super-resolved and upsample it to obtain a preliminary super-resolution result in which the target frame still lacks detail;

[0069] In this embodiment, the training video data consists of 5 frames, the middle frame is selected as the target frame to be super-resolved, and bicubic interpolation and upsamp...
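
A short sketch of the S3 step, assuming bicubic upsampling via PyTorch's F.interpolate; the upsample_target helper and the 4x scale factor are hypothetical. The result is the preliminary, detail-lacking super-resolution frame that the network later refines.

import torch
import torch.nn.functional as F

def upsample_target(lr_seq, scale=4):
    # lr_seq: (T, 3, h, w); returns the bicubic-upsampled middle frame, shape (1, 3, h*scale, w*scale).
    target = lr_seq[lr_seq.shape[0] // 2].unsqueeze(0)   # middle frame of the 5-frame clip
    return F.interpolate(target, scale_factor=scale,
                         mode="bicubic", align_corners=False)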


Abstract

The invention provides a video super-resolution method based on temporal attention and a cyclic feedback network. The method brings to video super-resolution the fact that adjacent frames at different distances from the target frame contribute differently to the super-resolution reconstruction, together with the feedback mechanism of the human visual system and the cyclic, feedback-guided way in which humans learn new knowledge. A temporal attention module learns an attention map of the video sequence along the time axis, so that the contributions of adjacent frames at different temporal distances to the final reconstruction can be effectively distinguished. The video sequence is then rearranged and super-resolved through cyclic feedback by a cyclic feedback module, yielding a super-resolution network model that focuses on learning the information contributing most to super-resolution reconstruction and has strong high-level feature learning ability, thereby improving the video super-resolution effect.
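
The abstract does not detail how the sequence is rearranged; the sketch below assumes one plausible ordering (by temporal distance from the target frame) purely for illustration, and the rearrange_by_distance helper is hypothetical.

import torch

def rearrange_by_distance(lr_seq):
    # lr_seq: (T, 3, h, w); reorder frames from nearest to farthest from the middle (target) frame.
    t = lr_seq.shape[0]
    center = t // 2
    order = torch.argsort(torch.abs(torch.arange(t) - center))
    return lr_seq[order]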

Description

Technical field

[0001] The invention relates to the technical field of video processing, and in particular to a video super-resolution method based on a temporal attention and cyclic feedback network.

Background technique

[0002] Video super-resolution methods, which generate high-resolution videos from low-resolution videos, have been studied extensively for decades as a typical computer vision problem. They are not only important in theory but also urgently needed in practical applications. For example, in video surveillance, banks, stations, airports, and residential areas are equipped with multiple surveillance cameras; video super-resolution can improve the video quality and make it easier to observe detailed information about people and objects. In traffic management, because the scene observed by a camera is large, it is impossible to obtain detailed information about high-speed vehicles and passing pedestrians. Using the mu...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T3/40, G06K9/46, G06K9/62, G06N3/04, G06N3/08
CPC: G06T3/4053, G06N3/08, G06T2207/10016, G06N3/045, G06F18/214
Inventor: 张庆武, 朱鉴, 蔡金峰, 陈炳丰, 蔡瑞初, 郝志峰
Owner: GUANGDONG UNIV OF TECH