
Two-stage expression animation generation method based on dual generative adversarial network

A generative adversarial network technology, applied in the field of two-stage expression animation generation, which can solve problems such as unreasonable artifacts and blurred or low-resolution generated images

Active Publication Date: 2020-10-16
HEBEI UNIV OF TECH


Problems solved by technology

[0006] The technical problem to be solved by the present invention is to provide a two-stage facial expression animation generation method based on a dual generative adversarial network. In the first stage, an expression transfer network, named FaceGAN (Face Generative Adversarial Network), transfers the target expression onto the source face to produce a first-stage prediction map. In the second stage, a detail generation network, named FineGAN (Fine Generative Adversarial Network), enriches the facial details in the first-stage prediction map, generates a fine-grained second-stage prediction map, and synthesizes the video animation. The method of the present invention overcomes problems existing in the prior art, such as blurred or low-resolution generated images and unreasonable artifacts in the generated results.
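The two-stage data flow described above can be sketched as follows. This is a minimal illustrative sketch in numpy, not the patent's implementation: `face_gan` and `fine_gan` are toy stand-ins for the trained generators, and the mask region is hypothetical.

```python
import numpy as np

def face_gan(source_face, target_contour):
    """Stage-1 (FaceGAN) stand-in: transfer the expression encoded in the
    target contour map onto the source face, yielding a coarse first-stage
    prediction map. A real FaceGAN is a trained generator; here we simply
    blend the two inputs to illustrate the data flow."""
    return 0.7 * source_face + 0.3 * target_contour

def fine_gan(coarse, detail_mask):
    """Stage-2 (FineGAN) stand-in: enrich detail in the masked regions
    (eyes and mouth) of the first-stage prediction map."""
    refined = coarse.copy()
    # A trained FineGAN would synthesize real detail here; we brighten the
    # masked region slightly as a placeholder.
    refined[detail_mask] = np.clip(refined[detail_mask] * 1.1, 0.0, 1.0)
    return refined

def two_stage_generate(source_face, contour_sequence, detail_mask):
    """Run both stages over a sequence of target contour maps and return
    the frames of the synthesized expression animation."""
    return [fine_gan(face_gan(source_face, c), detail_mask)
            for c in contour_sequence]

rng = np.random.default_rng(0)
source = rng.random((64, 64))                         # toy source face
contours = [rng.random((64, 64)) for _ in range(3)]   # toy target contour maps
mask = np.zeros((64, 64), dtype=bool)
mask[20:30, 20:44] = True                             # hypothetical eye/mouth region
frames = two_stage_generate(source, contours, mask)
```

One frame is produced per target contour map, so driving the pipeline with a contour sequence extracted from a video yields the animation frames.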



Examples


Embodiment 1

[0087] The specific steps of the two-stage expression animation generation method based on the dual generative adversarial network of this embodiment are as follows:

[0088] The first step is to obtain the facial expression contour map of each frame in the dataset:

[0089] Collect a facial expression video sequence dataset and use the Dlib machine learning library to extract the face in each frame of the video sequence, obtaining 68 feature points for each face (in the field of expression transfer, the 68 feature points form the contours of the face, eyes, mouth, and nose; 5 or 81 feature points can also be used), as shown in the odd-numbered rows of Figure 2. Then connect the feature points in sequence with line segments to obtain the expression contour map of each frame of the video sequence, as shown in the even-numbered rows of Figure 2, recorded as e = (e₁, e₂, …, eᵢ, …, eₙ), where e represents the collection ...
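The contour-map construction above (connecting the 68 Dlib landmarks region by region with line segments) can be sketched as follows. The region index ranges are the standard Dlib 68-landmark layout; the rasterizer is a minimal illustrative stand-in for the drawing step, and with Dlib installed the landmark array would come from `dlib.shape_predictor` rather than the dummy points used here.

```python
import numpy as np

# Standard index ranges of the 68 Dlib facial landmarks; the flag marks
# regions drawn as closed loops (eyes, lips) rather than open chains.
REGIONS = [
    (0, 17, False),   # jaw line
    (17, 22, False),  # right eyebrow
    (22, 27, False),  # left eyebrow
    (27, 31, False),  # nose bridge
    (31, 36, False),  # lower nose
    (36, 42, True),   # right eye
    (42, 48, True),   # left eye
    (48, 60, True),   # outer lip
    (60, 68, True),   # inner lip
]

def contour_edges():
    """Return the landmark index pairs to connect with line segments when
    turning the 68 feature points into an expression contour map."""
    edges = []
    for start, stop, closed in REGIONS:
        for i in range(start, stop - 1):
            edges.append((i, i + 1))
        if closed:
            edges.append((stop - 1, start))
    return edges

def draw_contour_map(points, size):
    """Rasterize the contour map. 'points' is a (68, 2) array of (x, y)
    landmarks; 'size' is (H, W). With Dlib available, the points would be:
        shape = predictor(gray_image, face_rect)
        points = np.array([(p.x, p.y) for p in shape.parts()])
    """
    canvas = np.zeros(size, dtype=np.uint8)
    for i, j in contour_edges():
        (x0, y0), (x1, y1) = points[i], points[j]
        # Sample enough points along the segment to leave no pixel gaps.
        n = int(max(abs(x1 - x0), abs(y1 - y0))) + 1
        xs = np.linspace(x0, x1, n).round().astype(int)
        ys = np.linspace(y0, y1, n).round().astype(int)
        canvas[ys.clip(0, size[0] - 1), xs.clip(0, size[1] - 1)] = 255
    return canvas

points = np.array([[i + 5, (i % 10) + 5] for i in range(68)])  # dummy landmarks
contour = draw_contour_map(points, (80, 80))
```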



Abstract

The invention relates to a two-stage expression animation generation method based on dual generative adversarial networks. The method comprises the following steps: firstly, extracting expression features from a target expression contour map using the first-stage expression transfer network FaceGAN, and transferring the expression features to a source face to generate a first-stage prediction map; in the second stage, supplementing and enriching the details of the eye and mouth regions, which contribute most to expression changes, in the first-stage prediction map using the detail generation network FineGAN; and generating a fine-grained second-stage prediction map and synthesizing a face video animation. Both the expression transfer network FaceGAN and the detail generation network FineGAN are realized with generative adversarial networks. The method provides a two-stage generative adversarial network for expression animation generation: expression conversion is carried out in the first stage and image detail optimization in the second stage; the designated region of the image is extracted through a mask vector and optimized with emphasis, and a local discriminator is used in combination, so that the generation effect of important parts is better.
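The mask-vector emphasis described in the abstract can be sketched as follows: a minimal numpy illustration assuming a binary mask over the eye/mouth regions. The `weight` value and function names are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def region_crop(image, mask):
    """Extract the designated region (e.g. eyes or mouth) selected by a
    binary mask vector, zeroing everything else, so that a local
    discriminator sees only that part of the image."""
    return image * mask

def masked_l1(pred, target, mask, weight=2.0):
    """Reconstruction loss that optimizes the masked region with emphasis:
    a global L1 term plus a 'weight'-scaled L1 term restricted to the mask.
    The weight value is illustrative."""
    global_term = np.mean(np.abs(pred - target))
    local_term = np.sum(mask * np.abs(pred - target)) / max(mask.sum(), 1)
    return global_term + weight * local_term

# Toy example: the same pixel error costs more inside the masked region.
target = np.zeros((4, 4))
mask = np.zeros((4, 4))
mask[:2, :2] = 1.0                    # hypothetical eye/mouth region
pred_inside = target.copy(); pred_inside[0, 0] = 1.0
pred_outside = target.copy(); pred_outside[3, 3] = 1.0
loss_inside = masked_l1(pred_inside, target, mask)
loss_outside = masked_l1(pred_outside, target, mask)
```

Because the masked term is normalized by the mask area and scaled by `weight`, errors in the eye and mouth regions dominate the loss, which is one simple way to realize the "emphasis optimization" the abstract describes.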

Description

technical field [0001] The technical solution of the present invention relates to image data processing in computer vision, specifically a two-stage facial expression animation generation method based on dual generative adversarial networks. Background technique [0002] Facial expression synthesis refers to the transfer of expressions from a target expression reference face to a source face: the identity information of the newly synthesized source face image remains unchanged, but its expression is consistent with the target expression reference face. This technology has gradually been applied in film and television production, virtual reality, criminal investigation, and other fields. Facial expression synthesis has important research value in both academia and industry, and how to robustly synthesize natural and realistic facial expressions has become a challenging and hot research topic. [0003] Existing facial expression synthesis methods can be divided into two categories, namely...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00; G06T13/40; G06N3/04; G06N3/08
CPC: G06T13/40; G06N3/08; G06V40/176; G06V40/171; G06N3/045; Y02D10/00
Inventors: 郭迎春, 王静洁, 刘依, 朱叶, 郝小可, 于洋, 师硕, 阎刚
Owner: HEBEI UNIV OF TECH