Self-supervised video deblurring and image frame insertion method based on event camera

A deblurring and frame interpolation technology applied in the field of image processing. It addresses problems such as performance degradation on real data, achieves low latency, jointly handles motion blur and inter-frame information loss, and delivers good deblurring and frame interpolation results.

Pending Publication Date: 2022-05-13
WUHAN UNIV

AI Technical Summary

Problems solved by technology

Current event-camera-based deblurring and frame interpolation methods often treat the two tasks separately, yet in real scenes motion blur and inter-frame information loss are strongly coupled and therefore need to be handled jointly.
In addition, most existing deblurring and frame interpolation algorithms rely on supervised training on simulated datasets; because the data distribution of simulated datasets differs from that of real datasets, these methods often suffer performance degradation when tested on real scenes, which motivates a self-supervised framework.
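As an illustration of why the two tasks call for joint, self-supervised treatment, the sketch below shows a reblur-consistency objective: latent sharp frames reconstructed across the exposure window should average back to the blurry input, so no ground-truth sharp video is required. This is a minimal PyTorch-style sketch under assumed module names (`ldi_net`, `fusion_net`) and interfaces, not the patent's actual implementation.

```python
import torch
import torch.nn.functional as F

def reblur_consistency_loss(blurred, events, ldi_net, fusion_net, n_latent=7):
    """Hypothetical self-supervised loss coupling deblurring and interpolation.

    blurred: (B, C, H, W) blurry frame; events: event representation covering
    the exposure window. The (assumed) networks reconstruct n_latent latent
    sharp frames spread over the exposure; by the blur formation model their
    temporal average should re-create the blurry input, which supervises
    training without sharp ground truth.
    """
    latent_frames = []
    for k in range(n_latent):
        t = k / (n_latent - 1)                      # normalized time in [0, 1]
        sharp_k = fusion_net(ldi_net(blurred, events, t), blurred)
        latent_frames.append(sharp_k)
    latent = torch.stack(latent_frames, dim=0)      # (n_latent, B, C, H, W)

    reblurred = latent.mean(dim=0)                  # temporal average ≈ blur
    return F.l1_loss(reblurred, blurred)
```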

Embodiment Construction

[0059] In order to understand the present invention more clearly, its technical contents are described in detail below.

[0060] An ordinary optical camera and an event camera are used to shoot high-dynamic scenes, such as fast-moving objects or a fast-moving camera, collecting low-frame-rate video with motion blur; a blurred video dataset is then constructed by spatiotemporally matching the blurred image frames with the event streams. Because the scale of data collected in the field is limited, data augmentation can be used to expand the samples. Deep learning is a data-driven method: the larger the training dataset, the stronger the generalization ability of the trained model. In practice, however, it is difficult to cover all scenarios when collecting data, and collection is costly, so the training set is limited. Generating varied training data from existing data increases coverage while reducing cost, which is the purpose of data augmentation.
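A minimal sketch of the matching and augmentation step described in [0060], assuming the events arrive as a NumPy structured array with fields `t`, `x`, `y`, `p` and have already been voxelized into a (bins, H, W) grid; the helper names are illustrative, not the patent's code.

```python
import numpy as np

def match_events_to_frame(events, t_start, t_end):
    """Return the events whose timestamps fall inside the exposure window
    [t_start, t_end) of one blurry frame (spatiotemporal matching)."""
    mask = (events["t"] >= t_start) & (events["t"] < t_end)
    return events[mask]

def augment_pair(frame, event_voxel, rng):
    """Sample expansion: apply the same horizontal flip to the blurry frame
    (H, W) and its event voxel grid (bins, H, W) so the pair stays aligned.
    Flipping both modalities together keeps the apparent motion consistent."""
    if rng.random() < 0.5:
        frame = frame[:, ::-1].copy()
        event_voxel = event_voxel[:, :, ::-1].copy()
    return frame, event_voxel

# Example usage with synthetic data:
rng = np.random.default_rng(0)
events = np.zeros(1000, dtype=[("t", "f8"), ("x", "i4"), ("y", "i4"), ("p", "i1")])
events["t"] = np.sort(rng.uniform(0.0, 1.0, size=1000))
in_window = match_events_to_frame(events, t_start=0.2, t_end=0.4)
```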

Abstract

The invention provides a self-supervised video deblurring and frame interpolation method based on an event camera. The method comprises the steps of constructing a blurred video and event stream dataset, preprocessing the event streams, constructing a learnable double integral network and a fusion network to process the blurred video and the event streams, training the networks through a self-supervised framework, and reconstructing a high-frame-rate clear video. The event stream preprocessing is established through a joint event generation model and blurred image generation model, which guarantees that a latent clear image can be reconstructed at any target moment inside or outside the exposure time of the blurred video. The preprocessed event stream and the blurred image are fed into the learnable double integral network and the fusion network to remove motion blur and interpolate image frames. Finally, a self-supervised framework is constructed to train the network model by exploiting the mutual relations among the blurred video, the clear video, and the event stream, and the trained model processes the blurred video and event stream to reconstruct the high-frame-rate clear video. The method addresses motion blur and inter-frame information loss jointly and achieves good deblurring and frame interpolation results.
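For context, the "joint event generation model and blurred image generation model" mentioned above is presumably a learnable generalization of the classical event-based double integral relation; a sketch with a fixed contrast threshold c (an assumption, not necessarily the patent's exact formulation) is:

```latex
\begin{align*}
  % Event generation model: E(x; f, t) is the signed sum of event polarities
  % at pixel x between reference time f and time t, each step of size c.
  \log L(x, t) - \log L(x, f) &= c\, E(x; f, t) \\
  % Blurred image generation model: the blurry frame B is the temporal
  % average of the latent sharp frames L(x, t) over the exposure time T.
  B(x) &= \frac{1}{T} \int_{f - T/2}^{\, f + T/2} L(x, t)\, dt \\
  % Combining the two gives the double integral relation, from which the
  % latent sharp frame at time f is recovered from B and the events alone:
  L(x, f) &= \frac{T\, B(x)}{\displaystyle \int_{f - T/2}^{\, f + T/2} \exp\!\big(c\, E(x; f, t)\big)\, dt}
\end{align*}
```

Replacing the fixed threshold and the analytic integration with learned networks is presumably what allows the method to also reconstruct frames at target moments outside the exposure time, as the abstract states.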

Description

Technical field

[0001] The invention belongs to the field of image processing, and in particular relates to realizing self-supervised video deblurring and image frame interpolation by using an event camera.

Background technique

[0002] In high-dynamic scenes (such as a fast-moving camera or non-linearly moving objects), video shooting often suffers from motion blur and loss of inter-frame information, which degrades overall video quality. Motion deblurring and frame interpolation, as important branches of low-level visual image processing, can reconstruct clear, high-frame-rate videos from blurred, low-frame-rate videos, providing a clear and smooth viewing experience and facilitating subsequent processing such as object detection, tracking, and 3D reconstruction; they therefore have extremely high application value.

[0003] However, due to the ambiguity of the motion direction and the erasure of target edge information in the blurred image, it i...

Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T5/00, G06T5/50, G06N3/04, G06N3/08
CPC: G06T5/50, G06N3/08, G06T2207/10016, G06T2207/20081, G06T2207/20084, G06N3/045, G06T5/73
Inventors: 余磊, 张翔
Owner: WUHAN UNIV