
A Video Generation Method Combining Variational Autoencoders and Generative Adversarial Networks

An autoencoder and encoder technology, applied in the field of video generation combining variational autoencoders and generative adversarial networks. It addresses the problems of reduced video-generation quality, lack of temporal continuity, and image deformation, and achieves improved inter-frame continuity and stability, easier training, and overcomes poor inter-frame continuity.

Active Publication Date: 2021-04-20
ZHEJIANG UNIV
Cites: 6 | Cited by: 0

AI Technical Summary

Problems solved by technology

However, existing video generation methods often suffer from insufficient temporal continuity between frames and from image deformation when the input information is insufficient, which reduces the quality of the generated video.



Examples


Embodiment

[0054] Step 1. Take handwritten digit images from the MNIST data set. If the digit taken out is one of "0, 1, 4, 6, 9", a 16-frame video of 48×48 pixels is formed for that digit: the digit starts at an arbitrary position in the first frame and moves up and down over the 16 frames. If the digit taken out is one of "2, 3, 5, 7, 8", a 16-frame video of 48×48 pixels is formed for that digit: the digit starts at an arbitrary position in the first frame and moves left and right over the 16 frames. A text description is written for each moving-digit video, such as "The digit 0 is moving up and down" or "The digit 2 is moving left and right". In this way, ten categories of handwritten-digit motion videos are obtained, and each category of video has a corresponding text description;
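The following is a minimal, illustrative sketch of how such a moving-digit "video-text" data set could be constructed. It is not code from the patent; the per-frame step size, the bouncing behaviour at the frame edge, and the use of torchvision's MNIST loader are assumptions.

```python
import numpy as np
from torchvision.datasets import MNIST

FRAMES, SIZE, DIGIT = 16, 48, 28          # 16 frames of 48x48 pixels, 28x28 digit
VERTICAL_DIGITS = {0, 1, 4, 6, 9}          # these digits move up and down

def make_video(digit_img, label, rng):
    """Place the digit at a random position and bounce it along one axis."""
    video = np.zeros((FRAMES, SIZE, SIZE), dtype=np.float32)
    x = int(rng.integers(0, SIZE - DIGIT))
    y = int(rng.integers(0, SIZE - DIGIT))
    step = 2                               # pixels per frame (assumption; the patent gives no speed)
    for t in range(FRAMES):
        video[t, y:y + DIGIT, x:x + DIGIT] = digit_img
        if label in VERTICAL_DIGITS:       # digits 0,1,4,6,9 move up and down
            y += step
            if y < 0 or y > SIZE - DIGIT:  # bounce off the frame edge
                step = -step
                y += 2 * step
        else:                              # digits 2,3,5,7,8 move left and right
            x += step
            if x < 0 or x > SIZE - DIGIT:
                step = -step
                x += 2 * step
    direction = "up and down" if label in VERTICAL_DIGITS else "left and right"
    return video, f"The digit {label} is moving {direction}"

rng = np.random.default_rng(0)
mnist = MNIST(root="./data", train=True, download=True)
img = np.asarray(mnist[0][0], dtype=np.float32) / 255.0   # 28x28 grayscale digit
video, caption = make_video(img, int(mnist[0][1]), rng)    # one "video-text" pair
```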

[0055] Step 2. Preprocess the video data set and its text descriptions obtained in Step 1 to obtain the "video-text" data set used in the trai...
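The remainder of Step 2 is truncated above. Purely as a placeholder, a typical pairing and normalization step for such a "video-text" data set might look like the following; all names and choices here are assumptions, not the patent's actual preprocessing.

```python
import numpy as np

def preprocess(video, caption, vocab):
    """Normalize frames and turn the caption into integer word indices."""
    video = video * 2.0 - 1.0                        # scale pixels to [-1, 1]
    tokens = [vocab.setdefault(w, len(vocab))        # grow a word-index vocabulary
              for w in caption.lower().split()]
    return video.astype(np.float32), np.array(tokens, dtype=np.int64)

vocab = {}
# dataset = [preprocess(v, c, vocab) for v, c in raw_pairs]   # the "video-text" set
```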



Abstract

The invention discloses a video generation method combining a variational autoencoder and a generative adversarial network, belonging to the technical field of video generation. The generator of the generative adversarial network produces a series of latent variables, and these latent variables are passed through the decoder of the trained variational autoencoder to generate a series of related images. The discriminator of the generative adversarial network does not discriminate the video directly; instead, the video is passed through the encoder of the variational autoencoder to obtain a series of low-dimensional latent variables, and the discriminator operates on these latent variables. The method can generate video according to the input description text, overcomes the problem of poor continuity between frames in the generated video, and improves the inter-frame continuity of the generated video. Training is divided into two parts: first training the variational autoencoder, and then using the trained variational autoencoder in the basic training of the generative adversarial network, which makes training easier and more stable.
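For concreteness, the pipeline described in this abstract could be sketched as follows. This is a minimal sketch assuming PyTorch; the layer sizes, the GRU-based generator, the stand-in text features, and the omission of the VAE reparameterization are all assumptions, since the patent text shown here gives no network definitions.

```python
import torch
import torch.nn as nn

LATENT, FRAMES, TEXT = 64, 16, 128      # illustrative dimensions

class VAE(nn.Module):
    """Per-frame autoencoder trained first (reparameterization omitted for brevity)."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(48 * 48, 256), nn.ReLU(),
                                 nn.Linear(256, LATENT))
        self.dec = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(),
                                 nn.Linear(256, 48 * 48), nn.Sigmoid())

class Generator(nn.Module):
    """Maps text features plus noise to a sequence of frame latents."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(TEXT + LATENT, LATENT, batch_first=True)

    def forward(self, text_emb, noise):
        latents, _ = self.rnn(torch.cat([text_emb, noise], dim=-1))
        return latents                                   # (B, FRAMES, LATENT)

class Discriminator(nn.Module):
    """Judges sequences of latents rather than raw video frames."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FRAMES * LATENT, 256), nn.ReLU(),
                                 nn.Linear(256, 1))

    def forward(self, latents):
        return self.net(latents.flatten(1))

vae, gen, disc = VAE(), Generator(), Discriminator()
text_emb = torch.randn(2, FRAMES, TEXT)                  # stand-in text features
noise = torch.randn(2, FRAMES, LATENT)
fake_latents = gen(text_emb, noise)                      # generator output
fake_frames = vae.dec(fake_latents)                      # decoded 48x48 frames
real_video = torch.rand(2, FRAMES, 48 * 48)              # flattened real frames
real_latents = vae.enc(real_video)                       # encode real video
fake_score, real_score = disc(fake_latents), disc(real_latents)
```

The point of the design, per the abstract, is that adversarial training happens in the low-dimensional latent space produced by the already-trained variational autoencoder, which is what the text credits for easier, more stable training and better inter-frame continuity.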

Description

Technical field [0001] The invention belongs to the technical field of video generation, and in particular relates to a video generation method combining a variational autoencoder and a generative adversarial network. Background technique [0002] In recent years, with the wide application of artificial intelligence technology across industries, productivity in many fields has been greatly improved. For example, in the production of TV programs, video generation technology can greatly reduce manual work. In industry, companies such as NVIDIA have proposed video generation technology based on generative adversarial networks to meet video generation needs in various situations. However, existing video generation methods often suffer from insufficient temporal continuity between frames and from image deformation when the input information is insufficient, which reduces the quality of the generated video. [0003] Diederik P Ki...


Application Information

Patent Type & Authority: Patents (China)
IPC (8): H04N21/2343, H04N21/4402
CPC: H04N21/2343, H04N21/4402
Inventor: 吴萌, 李荣鹏, 赵志峰, 张宏纲
Owner: ZHEJIANG UNIV