
Neural network video deblurring method based on multi-attention mechanism fusion

A neural network video deblurring technology, applied in biological neural network models, neural learning methods, neural architectures, etc., which addresses problems such as limited extracted information, temporal discontinuity in the restored video, and unrealistic restored video.

Active Publication Date: 2020-08-14
WENZHOU UNIVERSITY

AI Technical Summary

Problems solved by technology

In addition, the information captured by a simple CNN model is relatively limited and cannot effectively model temporal and spatial information, which results in a degree of temporal discontinuity in the restored video and makes the restored video look unrealistic.

Method used



Embodiment Construction

[0028] Referring to Figure 1 to Figure 5, the present invention discloses a neural network video deblurring method based on multi-attention mechanism fusion, comprising the following steps:

[0029] S1. Construct a video deblurring model; wherein the deblurring model includes a spatiotemporal attention module, a channel attention module, a feature deblurring module and an image reconstruction module;

[0030] The specific process is as follows: as shown in Figure 2, a video deblurring model is constructed; the video deblurring model includes a spatiotemporal attention module (as shown in Figure 3), a channel attention module (as shown in Figure 4), a feature deblurring module and an image reconstruction module (as shown in Figure 2).
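To make the wiring of these four modules concrete, the following is a minimal, hypothetical PyTorch sketch of a model with the structure named in S1. The class names, layer widths and placeholder convolutions are illustrative assumptions, not the architecture in the patent's figures.

```python
# Hypothetical skeleton of the four-module model described in S1.
# Layer choices and widths are assumptions, not the patented architecture.
import torch
import torch.nn as nn

class VideoDeblurModel(nn.Module):
    def __init__(self, in_ch=3, feat=32):
        super().__init__()
        self.embed = nn.Conv3d(in_ch, feat, kernel_size=3, padding=1)        # per-clip feature extraction
        self.spatiotemporal_attention = nn.Conv3d(feat, feat, 3, padding=1)  # branch one (placeholder)
        self.channel_attention = nn.Conv3d(feat, feat, 3, padding=1)         # branch two (placeholder)
        self.feature_deblur = nn.Sequential(                                 # fuses the two branches
            nn.Conv3d(2 * feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(feat, feat, 3, padding=1), nn.ReLU(inplace=True))
        self.image_reconstruction = nn.Conv2d(feat, in_ch, 3, padding=1)     # feature space -> image

    def forward(self, frames):                        # frames: (B, C, T, H, W) blurred video clip
        f = self.embed(frames)
        branch1 = torch.sigmoid(self.spatiotemporal_attention(f)) * f        # attention-weighted features
        branch2 = torch.sigmoid(self.channel_attention(f)) * f
        fused = self.feature_deblur(torch.cat([branch1, branch2], dim=1))    # deblurred feature volume
        mid = fused[:, :, fused.size(2) // 2]         # features of the intermediate frame
        return self.image_reconstruction(mid)         # estimated sharp intermediate frame
```

Under these assumptions, a clip of, say, five consecutive blurred frames shaped (B, 3, 5, H, W) yields one restored intermediate frame shaped (B, 3, H, W).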

[0031] S2. Obtain the original video sequence, and use the spatiotemporal attention module (branch one) in the video deblurring model to extract spatial local and global information at different positions between video frames, as well as similarity information between consecutive video frames;
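As an illustration of what "spatial global information plus inter-frame similarity" can look like in code, here is a small hypothetical attention branch that relates every position of the centre (intermediate) frame to every position of each frame in the clip. The 1x1 projections, softmax scaling and averaging are generic attention conventions assumed for the sketch, not the patented branch-one design.

```python
# Hypothetical sketch of a spatiotemporal attention branch for step S2.
# Cross-attends the centre frame to every frame in the clip, so global spatial
# context and inter-frame similarity both contribute (assumed design, not the patent's).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatioTemporalAttentionBranch(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, kernel_size=1)   # query from the centre frame
        self.k = nn.Conv2d(channels, channels, kernel_size=1)   # keys from every frame
        self.v = nn.Conv2d(channels, channels, kernel_size=1)   # values from every frame

    def forward(self, feats):
        # feats: (B, T, C, H, W) per-frame feature maps; the centre frame is the one to restore
        B, T, C, H, W = feats.shape
        q = self.q(feats[:, T // 2]).flatten(2)                  # (B, C, HW)
        out = torch.zeros(B, C, H, W, device=feats.device, dtype=feats.dtype)
        for t in range(T):
            k = self.k(feats[:, t]).flatten(2)                   # (B, C, HW)
            v = self.v(feats[:, t]).flatten(2)                   # (B, C, HW)
            # similarity between every centre-frame position and every position of frame t
            attn = F.softmax(q.transpose(1, 2) @ k / (C ** 0.5), dim=-1)     # (B, HW, HW)
            out = out + (v @ attn.transpose(1, 2)).view(B, C, H, W)
        return out / T                                           # aggregated spatiotemporal context
```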



Abstract

The invention discloses a neural network video deblurring method based on multi-attention mechanism fusion. The method comprises the following steps: S1, constructing a video deblurring model; S2, acquiring an original video sequence, and extracting spatial local and global information at different positions between video frames, as well as similarity information between consecutive video frames, by using a spatial-temporal attention module in the video deblurring model; S3, capturing different types of low-frequency and high-frequency information from the input blurred video sequence by using a channel attention module in the video deblurring model; S4, fusing the extracted information to obtain deblurred features, and mapping the deblurred features from the feature space to an image by using an image reconstruction module to obtain a clear intermediate frame; and S5, calculating the content loss and perceptual loss between the recovered intermediate frame and the corresponding clear image, and performing back propagation to train the network model. Effective deblurring processing can be carried out on blurred video to obtain clear and realistic video data.
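Step S5 of the abstract (content loss plus perceptual loss, followed by back propagation) can be sketched as a standard training step. The L1 content loss, the VGG19-feature perceptual loss and the 0.01 weight used below are common conventions assumed for illustration; the abstract does not specify these exact choices.

```python
# Hypothetical sketch of the S5 training step: content loss + perceptual loss, then back propagation.
# The L1 content loss, VGG19 perceptual features and the 0.01 weight are assumed conventions.
import torch
import torch.nn as nn
from torchvision.models import vgg19

vgg_feat = vgg19(weights="IMAGENET1K_V1").features[:16].eval()   # early VGG layers, a common perceptual choice
for p in vgg_feat.parameters():
    p.requires_grad_(False)

content_loss = nn.L1Loss()   # pixel-level content loss (L1 assumed; MSE is also common)
mse = nn.MSELoss()

def training_step(model, optimizer, blurred_clip, sharp_mid_frame, perceptual_weight=0.01):
    """Restore the intermediate frame, compare it with the ground-truth sharp frame,
    and back-propagate the combined loss to train the network."""
    restored = model(blurred_clip)                                # (B, 3, H, W) recovered intermediate frame
    loss = content_loss(restored, sharp_mid_frame)
    loss = loss + perceptual_weight * mse(vgg_feat(restored), vgg_feat(sharp_mid_frame))
    optimizer.zero_grad()
    loss.backward()                                               # back propagation
    optimizer.step()
    return loss.item()
```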

Description

Technical field

[0001] The invention relates to the technical field of video image processing, in particular to a neural network video deblurring method based on multi-attention mechanism fusion.

Background technique

[0002] Vision is an advanced form of human perception; therefore, video and images play an important role in human perception. Relevant studies have shown that, among all the information received by humans, videos and images account for as much as 75%. With the rapid development of big data, artificial intelligence and other technologies, digital video has become an indispensable part of people's daily life. However, due to the inherent physical limitations of imaging equipment and external environmental interference (camera jitter, occlusion, illumination changes, relative motion between the equipment and the target scene) during the processing of video images, it is impossible to avoid varying degrees of video image degradat...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T5/00, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/084, G06T2207/10016, G06T2207/20221, G06N3/045, G06F18/22, G06T5/73, Y02T10/40
Inventors: 张笑钦, 王涛, 蒋润华, 赵丽
Owner: WENZHOU UNIVERSITY