
Video motion amplification method based on improved self-encoding network

A motion amplification technology based on a self-encoding (autoencoder) network, applied in the field of video motion amplification. It addresses problems such as loss of image texture color, artifacts and contour deformation, and the checkerboard effect, achieving the effects of maintained effectiveness, improved performance, and low resource demand.

Pending Publication Date: 2022-02-08
CHINA THREE GORGES UNIV

AI Technical Summary

Problems solved by technology

[0005] The technical problem addressed by the present invention is that existing deep-learning-based video motion amplification methods suffer from image distortion, artifacts, and contour deformation, and that the fusion of image texture features and shape features exhibits color loss and even a checkerboard effect.



Examples


Embodiment Construction

[0036] As shown in Figure 1, the improved autoencoder network of this embodiment includes an encoder, an amplifier, and a decoder. The encoder includes a texture feature extraction unit and a shape feature extraction unit. The texture feature extraction unit adopts a channel attention mechanism, and the shape feature extraction unit includes a convolutional layer, a deformable convolution, and residual blocks, as shown in Figure 2.
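The patent does not give the exact form of the channel attention unit; a minimal squeeze-and-excitation-style sketch in NumPy (all weights, dimensions, and the reduction ratio here are hypothetical, not taken from the patent) illustrates the idea of rescaling feature channels by learned gates:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """SE-style channel attention: squeeze (global average pool),
    excite (two fully connected layers with a ReLU bottleneck),
    then rescale each channel of the input feature map.
    feat: (C, H, W); w1: (C//r, C); w2: (C, C//r)."""
    squeezed = feat.mean(axis=(1, 2))          # (C,) global average pool
    hidden = np.maximum(w1 @ squeezed, 0.0)    # ReLU bottleneck
    gates = sigmoid(w2 @ hidden)               # (C,) per-channel weights in (0, 1)
    return feat * gates[:, None, None]         # rescale channels

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((2, 8))   # reduction ratio r = 4 (assumed)
w2 = rng.standard_normal((8, 2))
out = channel_attention(feat, w1, w2)
assert out.shape == feat.shape
```

In a trained network w1 and w2 would be learned; the point is that texture channels are reweighted rather than passed through unchanged.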

[0037] As shown in Figure 3, the decoder of the improved autoencoder network consists of sequentially connected feature fusion layers, 9 residual blocks, upsampling layers, channel attention units, and convolutional layers.
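The decoder's feature fusion and upsampling stages can be sketched with simple NumPy stand-ins (fusion as channel-wise concatenation and nearest-neighbor 2x upsampling are assumptions; the patent's actual layers are learned):

```python
import numpy as np

def fuse(texture, shape_amp):
    """Feature fusion sketched as channel-wise concatenation of
    texture features and amplified shape features, both (C, H, W)."""
    return np.concatenate([texture, shape_amp], axis=0)

def upsample2x(feat):
    """Nearest-neighbor 2x upsampling of a (C, H, W) feature map,
    standing in for the decoder's learned upsampling layer."""
    return feat.repeat(2, axis=1).repeat(2, axis=2)

texture = np.zeros((16, 8, 8))     # texture features of frame I_A
shape_amp = np.ones((16, 8, 8))    # amplified shape features
fused = fuse(texture, shape_amp)   # (32, 8, 8): channels doubled
up = upsample2x(fused)             # (32, 16, 16): spatial size doubled
assert fused.shape == (32, 8, 8) and up.shape == (32, 16, 16)
```

The residual blocks and channel attention units between these stages would refine the fused representation before the final convolution produces the amplified frame.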

[0038] The video motion amplification method based on the improved autoencoder network includes the following steps:

[0039] Step 1: Decompose the video data and take two consecutive decomposed frames I_A and I_B as the input to the encoder, where I_A denotes the first frame of the two consecutive fram...
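The amplifier stage described above typically magnifies the pixel displacement difference by linearly scaling the gap between the two frames' shape features (a standard linear magnification formula; the amplification factor alpha and the toy feature values below are illustrative, not from the patent):

```python
import numpy as np

def amplify(shape_a, shape_b, alpha):
    """Linear motion magnification in feature space: add alpha times
    the inter-frame shape-feature difference back onto frame A."""
    return shape_a + alpha * (shape_b - shape_a)

shape_a = np.array([1.0, 2.0, 3.0])   # shape features of frame I_A
shape_b = np.array([1.1, 2.0, 2.9])   # shape features of frame I_B
amplified = amplify(shape_a, shape_b, alpha=10.0)
# subtle 0.1 displacements become 1.0 after amplification
assert np.allclose(amplified, [2.0, 2.0, 2.0])
```

The decoder then combines these amplified shape features with the texture features of frame I_A to render the output frame.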



Abstract

The invention relates to a video motion amplification method based on an improved self-encoding network, which uses the improved network to amplify subtle changes in a video. The method comprises the following steps: decomposing the video data and taking two consecutive decomposed frames as the input to an encoder; using the encoder to extract the shape features of the two consecutive frames as the input to an amplifier; using the amplifier to amplify the pixel displacement difference between the shape features of the two frames, obtaining amplified shape features; and using a decoder to upsample the texture features of the first frame and combine them with the amplified shape features to obtain and output the amplified frame. The method achieves effective fusion of shape and texture features in the motion-amplified image, reduces the loss of brightness, color, and texture during video motion amplification, and preserves shallow feature information.

Description

Technical field

[0001] The invention belongs to the field of image processing, and in particular relates to a video motion amplification method based on an improved self-encoding network.

Background technique

[0002] Most existing research targets information that people can easily observe with the naked eye, while computers are not used to process some important change information in videos. In response to this state of research, a video motion amplification technique known as the "motion microscope", proposed by an MIT team, can assist in obtaining this important information. Video motion amplification can magnify subtle changes in a video to a level observable by the naked eye, with applications such as detecting blood circulation and recognizing micro-expressions. However, as the video magnification factor increases, the image develops excessive blur and numerous noise artifacts, which may cause the outline of moving objects to disap...

Claims


Application Information

IPC (IPC8): G06V20/40; G06V10/44; G06V10/46; G06V10/56; G06V10/80; G06V10/82; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/08; G06N3/048; G06N3/045; G06F18/253
Inventors: 但志平, 张骁, 李勃辉, 方帅领
Owner: CHINA THREE GORGES UNIV