An Eulerian Video Color Magnification Method Based on Deep Learning

A deep-learning color magnification technique, applied to color television, color signal processing circuits, color television components, and related fields. It addresses the problems that existing methods cannot separate the color signal from the motion signal, that the spatial decomposition process relies on manual design, and that static and dynamic scenes cannot both be handled. The effect is to improve Eulerian video color magnification technology, reduce the difficulty of preparation, and avoid manual design.

Active Publication Date: 2022-04-12
ZHEJIANG UNIV


Problems solved by technology

First, the algorithm's spatial decomposition process relies on manual design and cannot separate the color signal from the motion signal; as a result, motion is amplified along with the color change, which interferes with the color magnification result.
Second, existing temporal filtering methods cannot handle both static and dynamic scenes, and the magnification results in dynamic scenes show obvious artifacts.
Third, the magnification results often contain severe noise and are prone to distortion.
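To make the first two problems concrete, the classical linear Eulerian pipeline being criticized can be sketched as follows. This is an illustrative simplification, not the patent's method: an ideal FFT-mask bandpass stands in for the hand-designed temporal filter, and no spatial pyramid is built.

```python
import numpy as np

def linear_euler_magnify(frames, alpha=20.0, low=0.8, high=1.2, fps=30.0):
    """Classical linear Eulerian magnification (per-pixel temporal bandpass).

    frames: (T, H, W, C) float array in [0, 1].
    A hand-designed spatial decomposition (e.g. one Gaussian-pyramid level)
    would normally precede this step; raw pixels are filtered here for brevity.
    """
    T = frames.shape[0]
    # Ideal temporal bandpass via FFT masking (real systems use FIR/IIR filters).
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)
    mask = (freqs >= low) & (freqs <= high)
    spectrum = np.fft.rfft(frames, axis=0)
    spectrum[~mask] = 0.0
    band = np.fft.irfft(spectrum, n=T, axis=0)
    # Amplify the band-passed signal and add it back. Because this step has no
    # notion of what is color and what is motion, both are amplified together,
    # which is exactly the first drawback described above.
    return np.clip(frames + alpha * band, 0.0, 1.0)
```

Any motion whose temporal frequency falls inside the band is boosted by the same factor `alpha` as the color change, which is why dynamic scenes show artifacts under this scheme.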

Method used



Examples


Embodiment 1

[0083] The inventors tested the effectiveness of the deep-learning-based Eulerian video color magnification method in a static scene. As shown in Figure 5, the Face video is magnified; the head remains still throughout the video. The magnified result shows that the invention amplifies the color change caused by the blood pulse, which is invisible on the face in the original video. Comparing the result of the present method with that of the linear Eulerian video color magnification method, the time-slice diagrams show that the present method's magnification effect is more pronounced, with less noise and a clearer picture.

Embodiment 2

[0084] As shown in Figure 6, a dynamic scene is magnified: in the Bulb video, a hand-held light bulb moves upward. Comparing the present method with the linear Eulerian video color magnification method, the present method clearly has a stronger magnification effect. Figure 7 shows that the FIR bandpass filter optimized in step (1.4) prevents artifacts in dynamic scenes.
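The embodiment does not reproduce the optimization procedure of step (1.4). For reference, a conventional windowed-sinc FIR bandpass for a pulse-like band can be sketched as below; this is a textbook design, not the patent's optimized filter.

```python
import numpy as np

def fir_bandpass(numtaps, low, high, fs):
    """Windowed-sinc FIR bandpass (Hamming window), normalized to unit gain
    at the band center.

    Shown only for comparison: the patent instead *optimizes* the taps so the
    filter matches the trained differential-filtering module, using a loss and
    procedure not reproduced here.
    """
    n = np.arange(numtaps) - (numtaps - 1) / 2.0
    # Bandpass kernel = difference of two low-pass sinc kernels.
    h = (2 * high / fs) * np.sinc(2 * high / fs * n) \
      - (2 * low / fs) * np.sinc(2 * low / fs * n)
    h *= np.hamming(numtaps)
    # Normalize so the response at the center frequency (low+high)/2 is 1.
    center = (low + high) / 2.0
    return h / np.sum(h * np.cos(2 * np.pi * center / fs * n))
```

For a heart-rate band of roughly 0.8-1.2 Hz at 30 fps, the filter needs a few hundred taps (several seconds of video) to resolve such a narrow band; shorter filters leak neighboring frequencies.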



Abstract

The invention discloses a deep-learning-based Eulerian video color magnification method comprising two steps: obtaining a color magnification network and invoking it. In step 1, a data set of synthesized images simulating small color changes is constructed, and a color magnification network built from four modules (spatial decomposition, differential filtering, amplification, and image reconstruction) is trained on it; at runtime, an optimized FIR bandpass filter replaces the network's differential-filtering module. In step 2, a given input video is decomposed into a frame sequence, the color magnification network is invoked to generate a color-magnified frame sequence, and the color-magnified video is synthesized from it. Compared with the linear magnification method, step 1 uses a deep learning model, so the training process is automated and cumbersome manual design is eliminated; the processing of step 2 greatly reduces noise, produces no artifacts in dynamic scenes, and strengthens the magnification effect.
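The runtime data flow described in the abstract can be sketched as below. The four module names come from the abstract, but the implementations here are placeholders (the learned spatial decomposition is replaced by the identity), so this only illustrates how the stages connect, not the trained network itself.

```python
import numpy as np

class ColorMagnifier:
    """Sketch of the runtime pipeline: spatial decomposition -> FIR bandpass
    temporal filtering (which replaces the trained differential-filtering
    module) -> amplification -> reconstruction."""

    def __init__(self, fir_taps, alpha=10.0):
        self.h = np.asarray(fir_taps)   # the optimized FIR bandpass of step (1.4)
        self.alpha = alpha              # magnification factor

    def spatial_decompose(self, frames):
        # Placeholder: identity. In the patent this stage is learned so that
        # the color signal is separated from the motion signal.
        return frames

    def temporal_filter(self, feats):
        # FIR bandpass along the time axis (axis 0 of a (T, H, W, C) array).
        return np.apply_along_axis(
            lambda s: np.convolve(s, self.h, mode='same'), 0, feats)

    def reconstruct(self, frames, band):
        # Amplify the filtered color signal and add it back to the input.
        return np.clip(frames + self.alpha * band, 0.0, 1.0)

    def __call__(self, frames):
        feats = self.spatial_decompose(frames)
        band = self.temporal_filter(feats)
        return self.reconstruct(frames, band)
```

The video-level wrapper of step 2 then reduces to decoding the input video into the `frames` array, calling the magnifier once, and re-encoding the returned frame sequence.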

Description

Technical field

[0001] The invention relates to the field of image and video processing, and in particular to a deep-learning-based Eulerian video color magnification method.

Background technique

[0002] Some color signals that exist objectively but cannot be observed directly by the naked eye carry rich information, such as the periodic change of skin color with blood circulation or the local skin-color change caused by slight pressing. The Eulerian video color magnification method is excellent at enhancing color changes that the naked eye can barely perceive: it captures and amplifies tiny color signals in a video so that they become directly observable, providing a better visualization method for interpreting such information.

[0003] However, the methods realized so far have deficiencies. First, the spatial decomposition process of the algorithm relies on manual design and cannot separate the color signal from the motion signal, so motion is amplified along with the color change and interferes with the color magnification result...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): H04N9/64
CPC: H04N9/646; H04N9/648
Inventors: 任重, 周昆, 邹锦爽
Owner: ZHEJIANG UNIV