
Figure video generation method based on generative adversarial network

A generative adversarial network and video generation technology, applied in the field of computer vision and image processing. It addresses the problem that static images often cannot meet practical needs; the method has high practical and promotional value, ensures accuracy, and follows a clear logical structure.

Active Publication Date: 2022-02-25
HARBIN INST OF TECH SHENZHEN GRADUATE SCHOOL
Cites: 6 · Cited by: 0
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

In practical applications, the generation of static images often cannot meet user needs. By contrast, videos, with their dynamic properties, provide a better interactive user experience.



Examples


Embodiment Construction

[0040] The technical solution of the present invention will be further described below in conjunction with the accompanying drawings, but is not limited thereto. Any modification or equivalent replacement made without departing from the spirit and scope of the technical solution of the present invention shall fall within the protection scope of the present invention.

[0041] This embodiment provides a character video generation model based on a generative adversarial network, as shown in Figures 2-5. The model consists of two parts, a generator and a discriminator, where:

[0042] The generator consists of a multi-scale feature extraction module (a convolutional network with multiple subsampling convolutional layers), a global-local module, and a texture renderer (based on the SPADE network);
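The patent does not disclose the extractor's exact layers, so the following is only an illustrative sketch of the multi-scale idea: the feature pyramid is produced by repeatedly downsampling the input. Average pooling stands in for the subsampling (strided) convolutional layers mentioned above; the function names and scale count are assumptions, not from the patent.

```python
import numpy as np

def avg_pool2x(x):
    """Downsample an (H, W, C) feature map by 2 via average pooling.
    A stand-in for a subsampling convolutional layer."""
    h, w, c = x.shape
    return (x[:h - h % 2, :w - w % 2]
            .reshape(h // 2, 2, w // 2, 2, c)
            .mean(axis=(1, 3)))

def multi_scale_features(image, num_scales=3):
    """Return a pyramid of progressively downsampled feature maps,
    one per scale (finest first)."""
    feats = [image]
    for _ in range(num_scales - 1):
        feats.append(avg_pool2x(feats[-1]))
    return feats

pyramid = multi_scale_features(np.zeros((64, 64, 3)))
```

In the real model each scale would be a learned convolutional feature map rather than a pooled copy of the image, but the pyramid structure consumed by the global-local module is the same.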

[0043] The discriminator consists of a spatial-consistency discriminator and a temporal-consistency discriminator. ...
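One common way to realise this split, sketched below under the assumption (not stated in the patent) that the temporal discriminator scores short sliding windows of frames: the spatial branch judges each frame on its own, while the temporal branch sees several consecutive frames stacked along the channel axis, so it can penalise flicker between frames.

```python
import numpy as np

def spatial_disc_inputs(video):
    """The spatial-consistency discriminator scores each frame
    independently: T inputs of shape (H, W, C)."""
    return [frame for frame in video]

def temporal_disc_inputs(video, window=3):
    """The temporal-consistency discriminator scores sliding windows of
    `window` consecutive frames, stacked along the channel axis, giving
    (T - window + 1) inputs of shape (H, W, C * window)."""
    t = video.shape[0]
    return [np.concatenate(video[i:i + window], axis=-1)
            for i in range(t - window + 1)]

video = np.zeros((8, 64, 64, 3))  # (T, H, W, C)
spatial_in = spatial_disc_inputs(video)
temporal_in = temporal_disc_inputs(video)
```

The window size of 3 is an illustrative choice; the actual discriminator architectures are not disclosed in this excerpt.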



Abstract

The invention discloses a figure video generation method based on a generative adversarial network. The method comprises the following steps: 1, collecting original images and a target image; 2, for the collected original and target images, extracting multi-scale features between the target pose and the multiple original images using a multi-scale feature extraction module; 3, feeding the multi-scale features to a global-local module, where the global module establishes a global correspondence between the target pose features and the original image features, and the local module then corrects the global module's output; 4, selecting an original image, resizing it to a specific size by a pooling operation, and applying a deformation operation using the corrected flow field to obtain the final feature map; and 5, mapping the feature map from feature space to image space with a texture renderer to obtain the final generated image. A video can thus be generated according to the target pose while the clothing texture of the original image is kept unchanged.
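Step 4's deformation by a flow field can be sketched as follows. This is a minimal backward-warping example using nearest-neighbour sampling; the patent does not specify the interpolation scheme (a differentiable implementation would typically use bilinear sampling), and the function name and flow convention here are assumptions.

```python
import numpy as np

def warp_with_flow(feat, flow):
    """Warp an (H, W, C) feature map by an (H, W, 2) backward flow field:
    output[y, x] = feat[y + flow[y, x, 0], x + flow[y, x, 1]],
    with nearest-neighbour sampling and border clamping."""
    h, w, _ = feat.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, w - 1)
    return feat[src_y, src_x]

feat = np.arange(16, dtype=float).reshape(4, 4, 1)
flow = np.zeros((4, 4, 2))
flow[..., 1] = 1.0  # sample one pixel to the right everywhere
warped = warp_with_flow(feat, flow)
```

In the full pipeline the flow field is the corrected output of the global-local module, and the warped feature map is what the SPADE-based texture renderer maps back to image space.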

Description

Technical field

[0001] The invention belongs to the technical field of computer vision and image processing, and relates to a character video generation method based on a generative adversarial network.

Background technique

[0002] Generative models are at the core of computer vision. In recent years, methods such as GANs and VAEs have achieved impressive results on various image-based generation tasks. By contrast, video-based generation tasks have made little progress, especially generating videos containing character images (also known as animation generation), because in addition to ensuring that each generated frame is realistic, the temporal consistency of the generated video frames must also be ensured. In practical applications, the generation of static images often cannot meet user needs; videos, with their dynamic properties, provide a better interactive user experience. Character video generation not only needs to ensure that the...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06T13/00; G06N3/04; G06N3/08
CPC: G06T13/00; G06N3/08; G06N3/045; Y02T10/40
Inventor: 吴爱国, 沈世龙, 张颖
Owner: HARBIN INST OF TECH SHENZHEN GRADUATE SCHOOL