
Euler video color amplification method based on deep learning

A technology of color amplification and deep learning, applied to color television, color television components, color signal processing circuits, etc., which solves the problems of interference in the color amplification result, a spatial decomposition process that relies on manual design, and noise.

Active Publication Date: 2021-06-08
ZHEJIANG UNIV
Cites: 11 · Cited by: 1

AI Technical Summary

Problems solved by technology

First, the spatial decomposition process of the algorithm relies on manual design and cannot separate the color signal from the motion signal, so motion is amplified along with the color change, which interferes with the color amplification result.
Second, existing temporal filtering methods cannot handle both static and dynamic scenes, and the amplification results show obvious artifacts in dynamic scenes.
Third, the amplification results often contain severe noise and are prone to distortion.



Examples


Embodiment 1

[0083] Embodiment 1. The inventor tested the effectiveness of the deep learning-based Euler video color amplification method in a static scene. As shown in Figure 5, the Face video is amplified; the head remains still throughout the video. From the amplified result it is easy to see that the present invention amplifies the color change caused by the blood pulse, which is originally invisible on the face. Comparing the result of the method of the present invention with that of the linear Euler video color magnification method, the time slice diagram shows that the amplification effect of the present invention is more pronounced, with less noise and a clearer picture.
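For readers unfamiliar with the time slice diagrams used in this comparison, the short Python sketch below (not part of the patent) shows one common way to build such a slice: the same scanline is taken from every frame and the lines are stacked along the time axis, so a periodic color change such as the facial blood pulse appears as striping. The array layout and the scanline position are assumptions for illustration.

```python
import numpy as np

def time_slice(frames: np.ndarray, row: int) -> np.ndarray:
    """Stack one scanline from every frame into a (T, W, 3) time slice image.

    frames is assumed to be a (T, H, W, 3) uint8 array of video frames.
    """
    # take the same pixel row from every frame; over time, periodic color
    # variation along this row shows up as stripes in the resulting image
    return frames[:, row, :, :]

# hypothetical usage: slice_img = time_slice(amplified_frames, row=120)
```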

Embodiment 2

[0084] Embodiment 2. As shown in Figure 6, a dynamic scene is amplified: in the Bulb video, a hand-held light bulb moves upward. Comparing the method of the present invention with the linear Euler video color amplification method, it is clear that the present method achieves a stronger amplification effect. Figure 7 shows that the FIR band-pass filter optimized in step (1.4) prevents artifacts from being generated in the dynamic scene.
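The embodiment attributes the absence of artifacts to the FIR band-pass filter optimized in step (1.4); the optimization itself is not detailed in this excerpt. Purely as an illustration of what a temporal FIR band-pass filter does to a per-pixel color signal, the sketch below designs one with the window method and applies it along the time axis. The pass band (0.8 to 3 Hz, roughly the human pulse band), the filter length, and the amplification factor are assumptions, not values from the patent.

```python
import numpy as np
from scipy.signal import firwin, lfilter

fps = 30.0                                   # assumed video frame rate
# window-method FIR band-pass design; the patent's step (1.4) instead
# optimizes the filter by a procedure not reproduced here
taps = firwin(numtaps=61, cutoff=[0.8, 3.0], pass_zero=False, fs=fps)

# temporal intensity signal of a single pixel over 300 frames (placeholder data)
pixel_signal = np.random.rand(300)

band = lfilter(taps, 1.0, pixel_signal)      # isolate the band-passed color variation
amplified = pixel_signal + 10.0 * band       # linearly amplify the filtered band
```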



Abstract

The invention discloses a deep learning-based Euler video color amplification method. The method comprises two steps: obtaining a color amplification network and calling the color amplification network. In step 1, a picture data set simulating tiny color changes is synthesized, and a color amplification network composed of a spatial decomposition module, a differential filtering module, an amplification processing module and an image reconstruction module is trained on this data set; at run time, an FIR band-pass filter is optimized and substituted for the differential filtering module of the network. In step 2, when color amplification is performed on a given input video, the video is decomposed into a frame sequence, a color-amplified frame sequence is generated by calling the color amplification network, and a color-amplified video is finally synthesized. Compared with the linear amplification method, the deep learning model used in step 1 makes the training process automatic and avoids tedious manual design, while the processing in step 2 greatly reduces noise, prevents artifacts in dynamic scenes, and enhances the amplification effect.
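To make the step-2 workflow of the abstract concrete (decompose the input video into a frame sequence, run the frames through the color amplification network, and synthesize the output video), here is a minimal Python/OpenCV sketch. The network interface used here, a callable taking the previous frame, the current frame and an amplification factor `alpha`, is an assumption for illustration; the patent excerpt does not specify the network's input signature.

```python
import cv2
import numpy as np
import torch

def amplify_video(in_path: str, out_path: str, net: torch.nn.Module, alpha: float = 10.0):
    """Sketch of step 2: video -> frame sequence -> per-frame network call -> amplified video."""
    cap = cv2.VideoCapture(in_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    ok, first = cap.read()
    if not ok:
        raise IOError(f"cannot read {in_path}")
    h, w = first.shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

    def to_tensor(frame):
        # BGR uint8 frame -> float tensor in [0, 1] with shape (1, 3, H, W)
        return torch.from_numpy(frame[:, :, ::-1].copy()).permute(2, 0, 1).float().div(255).unsqueeze(0)

    prev = to_tensor(first)
    writer.write(first)                      # first frame is passed through unchanged
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cur = to_tensor(frame)
        with torch.no_grad():
            # hypothetical signature: the network receives the previous and current
            # frame plus an amplification factor and returns the amplified frame
            amplified = net(prev, cur, alpha).clamp(0.0, 1.0)
        out = (amplified.squeeze(0).permute(1, 2, 0).cpu().numpy()[:, :, ::-1] * 255).astype(np.uint8)
        writer.write(out)
        prev = cur
    cap.release()
    writer.release()
```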

Description

Technical field

[0001] The invention relates to the field of image and video processing, and in particular to a deep learning-based Euler video color amplification method.

Background technique

[0002] Some color signals that exist objectively but cannot be directly observed by the naked eye contain rich information, such as the periodic change of skin color with blood circulation and the local skin color change caused by slight pressing. The Euler video color amplification method is excellent at enhancing color changes that are difficult to perceive with the naked eye: it captures and amplifies the tiny color signals in a video so that they can be observed directly by the human eye, providing a better visualization method for interpreting such information.

[0003] However, the methods realized so far have some deficiencies. First, the spatial decomposition process of the algorithm relies on manual design, and cannot separate the color sign...


Application Information

IPC(8): H04N9/64
CPC: H04N9/646; H04N9/648
Inventors: 任重 (Ren Zhong), 周昆 (Zhou Kun), 邹锦爽 (Zou Jinshuang)
Owner: ZHEJIANG UNIV