
The invention discloses a video restoration model training method based on a deep network and a video restoration method

A model-training and deep-network technology applied in the field of image processing. It addresses the problems that existing methods cannot achieve end-to-end image restoration, that video restoration methods are inefficient, and that they are complicated to use; it improves the restoration effect, increases processing speed, and has good practical value.

Inactive Publication Date: 2019-06-28
HUAZHONG UNIV OF SCI & TECH


Problems solved by technology

The patent "A method for removing non-uniform motion blur in images based on deep neural network" (application number CN 104680491) proposes using a deep convolutional network to estimate the point spread function (PSF) of blurred image blocks, then using an optimized Markov random field model to obtain the pixel-by-pixel PSF of the blurred image, and finally, based on the estimated PSF, recovering a clear image with a non-blind image restoration algorithm. This method successfully introduces deep learning into image restoration, but it must first obtain the PSF of the blurred image through the deep network and then restore the blurred image with the non-blind restoration algorithm; it therefore cannot achieve end-to-end image restoration and is complicated to use in practical applications.
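To make the two-stage limitation concrete, the non-blind restoration step of such a pipeline can be sketched with a classical Wiener deconvolution, which only works once the PSF has already been estimated separately. This is an illustrative stand-in, not the cited patent's actual algorithm; the regularization constant `k` is an assumed parameter.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-3):
    """Non-blind Wiener deconvolution: recover an image given a KNOWN PSF.

    Illustrative only -- the PSF must be estimated by some other method
    first, which is exactly the two-stage structure the text criticizes.
    """
    # Frequency response of the PSF, zero-padded to the image size
    H = np.fft.fft2(psf, s=blurred.shape)
    B = np.fft.fft2(blurred)
    # Wiener filter: conj(H) / (|H|^2 + k); k regularizes near-zero frequencies
    F = np.conj(H) / (np.abs(H) ** 2 + k) * B
    return np.real(np.fft.ifft2(F))
```

Because the deconvolution quality depends entirely on the externally estimated PSF, errors in the first stage propagate into the restoration, which motivates the end-to-end approach of the present invention.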
[0005] Generally speaking, existing blurred-image restoration methods have certain limitations, so video restoration methods built on blurred-image restoration likewise suffer from low efficiency and poor results.




Detailed Description of the Embodiments

[0044] In order to make the object, technical solution and advantages of the present invention clearer, the present invention is further described in detail below in conjunction with the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present invention, not to limit it. In addition, the technical features involved in the various embodiments described below may be combined with each other as long as they do not conflict.

[0045] The deep-network-based video restoration model training method provided by the present invention, as shown in Figure 1, includes:

[0046] (1) Obtain multiple frames of clear images from standard clear videos;

[0047] In an optional embodiment, step (1) specifically includes:

[0048] Obtain multiple frames of standard images from standard clear videos, and cut each frame of sta...
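A minimal sketch of this frame-extraction-and-cropping step, assuming the video frames are already decoded into numpy arrays. The centered crop and the patch size of 256 are placeholders; the patent's actual cropping scheme is truncated in this excerpt.

```python
import numpy as np

def extract_clear_patches(frames, patch=256):
    """Crop a centered patch from each frame of a clear video.

    `frames` is a list of HxW (or HxWxC) numpy arrays; `patch=256` is an
    assumed size, not a value taken from the patent.
    """
    patches = []
    for f in frames:
        h, w = f.shape[:2]
        top, left = (h - patch) // 2, (w - patch) // 2
        # Centered crop keeps the most content-rich region of each frame
        patches.append(f[top:top + patch, left:left + patch])
    return patches
```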



Abstract

The invention discloses a video restoration model training method based on a deep network, and a video restoration method. The method comprises the following steps: obtaining multiple frames of clear images from a standard clear video; applying Gaussian blur to each clear frame to obtain the corresponding blurred image; taking each {clear image sequence, blurred image sequence} pair, consisting of n consecutive clear frames and the corresponding n blurred frames, as a training sample, thereby obtaining a training set composed of all training samples; establishing a video restoration model, formed by n-1 encoder-decoder networks connected in sequence, that restores blurred image I_n based on I_n and its previous n-1 blurred frames I_(n-1) through I_1; and training the video restoration model with the training set to obtain the target video restoration model. According to the invention, both the restoration efficiency and the restoration effect for blurred video can be improved.
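The training-set construction described in the abstract (Gaussian-blur every frame, then slide a window of n consecutive frames to form {clear sequence, blurred sequence} pairs) can be sketched as follows. The window length n=5 and sigma=2.0 are assumed hyperparameters, and the numpy-only separable blur is a stand-in for whatever Gaussian blur implementation the patent uses.

```python
import numpy as np

def gaussian_blur(img, sigma=2.0):
    """Separable Gaussian blur using numpy only (illustrative stand-in)."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    g /= g.sum()
    # Convolve rows, then columns, with the 1-D Gaussian kernel
    tmp = np.apply_along_axis(lambda m: np.convolve(m, g, mode="same"), 1, img)
    return np.apply_along_axis(lambda m: np.convolve(m, g, mode="same"), 0, tmp)

def build_training_set(clear_frames, n=5, sigma=2.0):
    """Form {clear sequence, blurred sequence} training pairs.

    Blur every frame, then slide a window of n consecutive frames;
    n=5 and sigma=2.0 are assumptions, not values from the patent.
    """
    blurred = [gaussian_blur(f, sigma) for f in clear_frames]
    return [(clear_frames[i:i + n], blurred[i:i + n])
            for i in range(len(clear_frames) - n + 1)]
```

A video of m frames thus yields m - n + 1 overlapping training samples, each pairing n clear frames with their n blurred counterparts.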

Description

Technical Field

[0001] The invention belongs to the technical field of image processing, and more specifically relates to a deep-network-based video restoration model training method and a video restoration method.

Background Technique

[0002] With the rapid development of informatization, electronic products such as smart phones and VR equipment are increasingly popular, and technologies such as video surveillance and video transmission are ever more widely used. During video shooting or transmission, video information is easily lost or distorted for various reasons, resulting in blurred video.

[0003] Image restoration has always been an important research direction in the field of image processing. Early image restoration techniques divided image restoration into non-blind and blind image restoration according to whether the point spread function (PSF) of the blurred image is known. Bec...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T5/00, G06N3/04, G06N3/08
Inventors: 桑农武理友李乐仁瀚李亚成高常鑫
Owner: HUAZHONG UNIV OF SCI & TECH