
A joint learning deep network model for optical flow estimation and denoising of video images

A deep network and video image technology, applied in the field of image processing, which can solve the problems of poor denoising effect, low optical flow estimation accuracy, and long processing time.

Active Publication Date: 2021-04-20
NANHUA UNIV

AI Technical Summary

Problems solved by technology

[0004] In order to overcome the above disadvantages, the present invention aims to provide a joint learning deep network model for optical flow estimation and denoising of video images, which uses the deep network model to jointly learn optical flow estimation and video denoising from a large number of training samples, thereby solving the problems of low optical flow estimation accuracy, poor denoising effect and long processing time in the existing technology.




Embodiment Construction

[0026] The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

[0027] As shown in Figure 1, this embodiment provides a joint learning deep network model for optical flow estimation and denoising of video images, comprising three modules: a preprocessing module, an optical flow estimation module and a denoising module. A sample data set is used, where each sample includes two adjacent noisy frames n1 and n2 from the video; as shown in Figure 2, the noisy frames n1 and n2 correspond to standard defini...
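The excerpt of this embodiment stops short of implementation details, so the following PyTorch sketch is only a hedged illustration of the overall layout described in [0027]: a preprocessing module, an optical flow estimation module and a denoising module, each built on an encoder-decoder backbone and fed with two adjacent noisy frames n1 and n2. All class names, channel counts and layer depths are assumptions for illustration, not the patented network.

```python
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    """Generic encoder-decoder backbone (hypothetical; depths/widths are placeholders)."""
    def __init__(self, in_ch, out_ch, base=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, base, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(base, out_ch, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class JointFlowDenoiseNet(nn.Module):
    """Three-module pipeline: preprocess each noisy frame, estimate optical flow
    from the pre-processed pair, then denoise using the frames plus the flow (a sketch)."""
    def __init__(self):
        super().__init__()
        self.preprocess = EncoderDecoder(in_ch=3, out_ch=3)   # per-frame pre-cleaning
        self.flow = EncoderDecoder(in_ch=6, out_ch=2)          # 2-channel flow field
        self.denoise = EncoderDecoder(in_ch=8, out_ch=3)       # frames + flow -> clean frame

    def forward(self, n1, n2):
        p1, p2 = self.preprocess(n1), self.preprocess(n2)
        flow = self.flow(torch.cat([p1, p2], dim=1))
        clean = self.denoise(torch.cat([p1, p2, flow], dim=1))
        return flow, clean

# Quick shape check with random "adjacent noisy frames" n1 and n2.
if __name__ == "__main__":
    n1 = torch.randn(1, 3, 128, 128)
    n2 = torch.randn(1, 3, 128, 128)
    flow, clean = JointFlowDenoiseNet()(n1, n2)
    print(flow.shape, clean.shape)   # (1, 2, 128, 128) (1, 3, 128, 128)
```

Feeding both pre-processed frames together with the estimated flow into the denoising module is one plausible way to let the denoiser exploit temporal correlation; the patent excerpt does not state the exact wiring between the three modules.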



Abstract

The invention discloses a joint learning deep network model for optical flow estimation and denoising of video images, belonging to the field of image processing. The model comprises a preprocessing module, an optical flow estimation module and a denoising module, each of which adopts an encoder-decoder network structure. Using the sample data set, the preprocessing module is first trained on its own; the relevant parameters of the preprocessing module are then fixed while the preprocessing module and the optical flow estimation module are trained together; finally, the relevant parameters of the preprocessing module and the optical flow estimation module are fixed and the deep network model comprising all three modules is trained as a whole. With the trained deep network model, optical flow estimation and denoising of noisy video images can be performed directly. The joint learning deep network model proposed by the present invention offers fast optical flow estimation and denoising with high accuracy, making it convenient to quickly process large numbers of video images in practice.
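The abstract specifies a three-stage training schedule (preprocessing module alone, then the optical flow estimation module with the preprocessing parameters fixed, then the full three-module network with both earlier sets of parameters fixed) but gives no code. The snippet below is a minimal sketch of how such stage-wise freezing could be expressed in PyTorch, assuming the hypothetical JointFlowDenoiseNet from the earlier sketch is in scope; the toy data loader and the loss functions are placeholders, not the criteria used by the patent.

```python
import torch
import torch.nn.functional as F

# Toy batches of (noisy frame 1, noisy frame 2, clean frame, reference flow) — purely illustrative.
def toy_loader(batches=4, size=64):
    for _ in range(batches):
        yield (torch.randn(2, 3, size, size), torch.randn(2, 3, size, size),
               torch.randn(2, 3, size, size), torch.randn(2, 2, size, size))

def set_trainable(module, flag):
    """Freeze (flag=False) or unfreeze (flag=True) all parameters of a module."""
    for p in module.parameters():
        p.requires_grad = flag

model = JointFlowDenoiseNet()  # hypothetical three-module model from the earlier sketch

def run_stage(trainable, loss_of_batch, lr=1e-4):
    """Train only the listed sub-modules for one pass over the toy data."""
    set_trainable(model, False)
    for m in trainable:
        set_trainable(m, True)
    opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=lr)
    for batch in toy_loader():
        loss = loss_of_batch(*batch)
        opt.zero_grad(); loss.backward(); opt.step()

# Stage 1: train the preprocessing module on its own (placeholder per-frame restoration loss).
run_stage([model.preprocess],
          lambda n1, n2, clean, ref_flow: F.l1_loss(model.preprocess(n2), clean))

# Stage 2: fix the preprocessing parameters and train the optical flow estimation module
# (placeholder supervised flow loss; the patent's actual criterion is not given in this excerpt).
def stage2_loss(n1, n2, clean, ref_flow):
    p1, p2 = model.preprocess(n1), model.preprocess(n2)
    return F.l1_loss(model.flow(torch.cat([p1, p2], dim=1)), ref_flow)
run_stage([model.flow], stage2_loss)

# Stage 3: fix preprocessing and flow parameters and train the full three-module network,
# so that only the denoising module still updates.
run_stage([model.denoise],
          lambda n1, n2, clean, ref_flow: F.l1_loss(model(n1, n2)[1], clean))
```

The point the abstract emphasizes is the curriculum: each later stage adds only the next module's parameters to the optimizer, and the final pass trains the three-module model as a whole with the earlier parameters held fixed.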

Description

Technical field

[0001] The invention relates to the field of image processing, and specifically to a joint learning deep network model for optical flow estimation and denoising of video images.

Background technique

[0002] Video images are subject to noise interference during acquisition, compression, storage, transmission and other stages. Noise significantly degrades the visual quality of video images and hinders subsequent intelligent analysis such as target recognition and tracking. It is therefore necessary to remove the noise in a video image while retaining the video information, so as to raise the signal-to-noise ratio and improve the visual effect.

[0003] Because video images are temporally correlated, optical flow estimation and video denoising can be combined to obtain better denoising results. However, existing joint optical flow estimation and video denoising algorithms require a large number of iterative operations a...
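Paragraph [0003] states only the general principle that temporal correlation lets a motion-compensated neighbouring frame assist denoising of the current frame. As a purely illustrative example of that principle (and not the method claimed here), the sketch below warps the previous noisy frame toward the current one with a given flow field and averages the two frames, i.e. classical motion-compensated averaging; the function names and the flow field are hypothetical.

```python
import torch
import torch.nn.functional as F

def warp(frame, flow):
    """Warp `frame` (N,3,H,W) toward the reference view using a dense flow field (N,2,H,W),
    where channel 0 is the horizontal and channel 1 the vertical displacement in pixels."""
    n, _, h, w = frame.shape
    # Base sampling grid in pixel coordinates.
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0).expand(n, -1, -1, -1)
    coords = grid + flow
    # Normalise to [-1, 1] as required by grid_sample.
    coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
    coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
    sample_grid = torch.stack((coords_x, coords_y), dim=-1)          # (N, H, W, 2)
    return F.grid_sample(frame, sample_grid, align_corners=True)

def motion_compensated_average(curr_noisy, prev_noisy, flow):
    """Simple temporal denoising: average the current frame with the flow-warped previous frame."""
    prev_warped = warp(prev_noisy, flow)
    return 0.5 * (curr_noisy + prev_warped)

# Example with random frames and a zero flow field (reduces to plain two-frame averaging).
curr = torch.randn(1, 3, 64, 64)
prev = torch.randn(1, 3, 64, 64)
flow = torch.zeros(1, 2, 64, 64)
print(motion_compensated_average(curr, prev, flow).shape)            # (1, 3, 64, 64)
```

The fixed 0.5/0.5 averaging here is the kind of hand-crafted step that a learned denoising module, such as the one described in this patent, is intended to replace.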


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06T5/00, G06N3/08, G06N3/04
CPC: G06N3/08, G06T5/002, G06N3/045
Inventor: 李望秀
Owner: NANHUA UNIV