
A facial expression generation method based on generative adversarial network

A facial expression generation technology based on generative adversarial networks, applied in the field of computer vision. It addresses problems such as the inability to specify a target face, poor facial expression quality, and the limited variety of faces in expression databases, and achieves the effect of maintaining continuity and authenticity in the generated video.

Active Publication Date: 2022-05-10
SHENZHEN INST OF ADVANCED TECH

AI Technical Summary

Problems solved by technology

[0006] Analysis shows that existing deep-learning schemes for generating facial expression videos usually generate the video from noise. Because facial expression databases are small, the generated faces lack variety and a specific target face cannot be designated; in addition, such video models reproduce facial expressions poorly.

Method used



Embodiment Construction

[0021] Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that the relative arrangements of components and steps, numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.

[0022] The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way to be taken as limiting the invention, its application, or uses.

[0023] Techniques, methods and devices known to those of ordinary skill in the relevant art may not be discussed in detail, but where appropriate, such techniques, methods and devices should be considered part of the description.

[0024] In all examples shown and discussed herein, any specific values should be construed as exemplary only, and not as limitations. Therefore, other instances of the exemplary embodiment may have dif...



Abstract

The invention discloses a human facial expression generation method based on a generative adversarial network. The method includes: constructing a deep learning network model comprising a recurrent neural network, a generator, an image discriminator, a first video discriminator, and a second video discriminator, where the recurrent neural network generates time-correlated motion vectors for the input image; the generator takes the motion vectors and the input image as input and outputs the corresponding video frames; the image discriminator judges the authenticity of each video frame; the first video discriminator judges the authenticity of the video and classifies it; and the second video discriminator controls the authenticity and smoothness of changes in the generated video. Sample images containing different expression categories are used as input to train the deep learning network model, and the trained generator is then used to generate face videos in real time. The present invention preserves facial features while generating expressions, the generated video maintains continuity and authenticity, and the method generalizes to different faces.
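The abstract describes a multi-component GAN: a recurrent network that produces per-frame motion vectors, a generator that maps the input image plus one motion vector to a frame, a per-frame image discriminator, and two video discriminators (one that also classifies the expression, one that only scores realism and smoothness of change). Below is a minimal PyTorch sketch of such an architecture; all module names, layer sizes, and tensor shapes are illustrative assumptions rather than the patent's actual implementation.

```python
# Hypothetical sketch of the model layout described in the abstract.
# Layer sizes and shapes are assumptions, not the patented network.
import torch
import torch.nn as nn

class MotionRNN(nn.Module):
    """Recurrent network producing a time-correlated motion vector per frame."""
    def __init__(self, noise_dim=32, motion_dim=64):
        super().__init__()
        self.gru = nn.GRU(noise_dim, motion_dim, batch_first=True)

    def forward(self, noise_seq):                 # noise_seq: (B, T, noise_dim)
        motion, _ = self.gru(noise_seq)           # (B, T, motion_dim)
        return motion

class FrameGenerator(nn.Module):
    """Takes the input face image and one motion vector, outputs one video frame."""
    def __init__(self, motion_dim=64):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(128 + motion_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, image, motion):             # image: (B, 3, H, W), motion: (B, motion_dim)
        feat = self.encode(image)                 # (B, 128, H/4, W/4)
        m = motion[:, :, None, None].expand(-1, -1, feat.size(2), feat.size(3))
        return self.decode(torch.cat([feat, m], dim=1))

class ImageDiscriminator(nn.Module):
    """Judges the authenticity of a single generated frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, frame):
        return self.net(frame).mean(dim=(1, 2, 3))

class VideoDiscriminator(nn.Module):
    """3D-conv discriminator over the frame sequence. With n_classes > 0 it also
    classifies the expression (first video discriminator); with n_classes = 0 it
    only scores realism/smoothness (second video discriminator)."""
    def __init__(self, n_classes=0):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 64, (3, 4, 4), stride=(1, 2, 2), padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool3d(1),
        )
        self.real_head = nn.Linear(64, 1)
        self.cls_head = nn.Linear(64, n_classes) if n_classes > 0 else None

    def forward(self, video):                     # video: (B, 3, T, H, W)
        h = self.features(video).flatten(1)       # (B, 64)
        cls = self.cls_head(h) if self.cls_head is not None else None
        return self.real_head(h), cls

# Shape check with hypothetical sizes: one 64x64 face, 16 frames.
# rnn, gen = MotionRNN(), FrameGenerator()
# motion = rnn(torch.randn(1, 16, 32))                    # (1, 16, 64)
# frame0 = gen(torch.randn(1, 3, 64, 64), motion[:, 0])   # (1, 3, 64, 64)
```

Under this layout the generator would be trained jointly against the three discriminators, and at inference time only the recurrent network and the generator are needed to animate a given face, consistent with the abstract's claim of real-time generation with the trained generator.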

Description

Technical field

[0001] The present invention relates to the technical field of computer vision, and more specifically, to a method for generating human facial expressions based on a generative adversarial network.

Background technique

[0002] In terms of face generation, 3DMM (the 3D morphable face model) generates faces by changing parameters such as shape, texture, pose, and illumination. DRAW (Deep Recurrent Attentive Writer) uses a recurrent neural network (RNN) to realize image generation, and PixelCNN uses a convolutional neural network (CNN) instead of an RNN to realize pixel-by-pixel image generation. [0003] Since the emergence of the generative adversarial network (GAN), it has been widely used in image generation, and more and more GAN-based models have been applied to facial expression conversion. For example, ExprGAN (expression editing with controllable intensity) combines a conditional generative adversarial network with an adversarial autoencoder to achie...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06V40/16, G06V20/40, G06V10/82, G06N3/04, G06N3/08
CPC: G06N3/08, G06V40/174, G06V20/41, G06V20/46, G06N3/045
Inventor: 王蕊, 施璠, 曲强, 姜青山
Owner: SHENZHEN INST OF ADVANCED TECH