Facial expression synthesis method based on generative adversarial network

A facial expression synthesis technology based on generative adversarial networks, applied in the fields of deep learning and image processing, which addresses the problems of unnatural, unrealistic, and low-resolution synthesized expressions, and achieves a convenient, intuitive method whose synthesized expression intensities are more vivid and realistic.

Pending Publication Date: 2020-05-22
NANJING UNIV OF SCI & TECH

AI Technical Summary

Problems solved by technology

[0004] However, the first type of method cannot capture the changes that accompany different expression intensities, such as facial wrinkles, resulting in unnatural and unrealistic expressions. In addition, images generated by such methods sometimes lack fine details and are often blurred or low-resolution. The second type of method requires the data to be annotated with expression intensity; in practical applications, however, it is difficult to define expression intensity by a unified standard, so although this type of method can achieve fine-grained control, it has limitations.



Examples

[0028] The facial expression image synthesis method based on a generative adversarial network according to the present invention comprises the following steps:

[0029] Step 1: obtain facial expression data;

[0030] Step 2: preprocess the expression data set: first obtain the key point information of each face image, then crop the images to a uniform size according to the key points, divide them into training data and test data, and manually classify the training data into different categories according to expression intensity;

[0031] Step 3: construct a generative adversarial network, and add expression-intensity discrimination information as well as the ordering and correlation information between different intensities to the network (a sketch of one possible realization follows Step 4);

[0032] Step 4: use the preprocessed expression data to train and test the generative adversarial network, and adjust the network parameters to optimize the generative adversarial network model.
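The patent text does not disclose concrete layer configurations or loss weights, so the following is only a minimal, hypothetical PyTorch sketch of one way Step 3 could be realized: the generator is conditioned on a target intensity label, and the discriminator carries an auxiliary intensity-classification head (in the style of an auxiliary-classifier GAN) that supplies the intensity discrimination signal; the ordering and correlation between intensities would enter as an additional loss term, indicated here only by a comment. The class names, layer sizes, and the assumed number of intensity categories are illustrative assumptions, not the invention's actual architecture.

```python
# Hypothetical sketch of an intensity-conditioned GAN (architecture not taken from the patent).
import torch
import torch.nn as nn

NUM_INTENSITIES = 5   # assumed number of manually labeled intensity categories
IMG_CHANNELS = 3

class Generator(nn.Module):
    """Maps a neutral face plus a target intensity label to an expression image."""
    def __init__(self, num_intensities=NUM_INTENSITIES):
        super().__init__()
        self.num_intensities = num_intensities
        # The intensity label is broadcast to a one-hot map and concatenated
        # with the neutral input image along the channel dimension.
        in_ch = IMG_CHANNELS + num_intensities
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, IMG_CHANNELS, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, neutral_img, intensity):   # intensity: (B,) long tensor of labels
        b, _, h, w = neutral_img.shape
        onehot = torch.zeros(b, self.num_intensities, h, w, device=neutral_img.device)
        onehot.scatter_(1, intensity.view(b, 1, 1, 1).expand(b, 1, h, w), 1.0)
        return self.net(torch.cat([neutral_img, onehot], dim=1))

class Discriminator(nn.Module):
    """Adversarial real/fake head plus an auxiliary expression-intensity classifier."""
    def __init__(self, num_intensities=NUM_INTENSITIES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(IMG_CHANNELS, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.adv_head = nn.Linear(128, 1)                       # real vs. fake
        self.intensity_head = nn.Linear(128, num_intensities)   # intensity classification

    def forward(self, img):
        f = self.features(img)
        return self.adv_head(f), self.intensity_head(f)

# One possible discriminator loss on a labeled real image (x, k): adversarial BCE plus
# intensity cross-entropy; the patent's ordering/correlation information between adjacent
# intensities (e.g., a ranking-style constraint) would be added as a further term here.
# adv, cls = D(x)
# loss_d = bce_with_logits(adv, ones) + cross_entropy(cls, k)
```

Conditioning the discriminator with an explicit intensity classifier, rather than only concatenating the label, is one common way to push the generator's outputs toward the requested intensity category.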

Embodiment 1

[0056] This embodiment takes the CK+ (http://www.consortium.ri.cmu.edu/ckagree/) and MUG (https://mug.ee.auth.gr/fed/) data sets as examples to study the facial expression image synthesis method based on a generative adversarial network according to the present invention. The specific implementation steps are as follows:

[0057] Step 1: download the facial expression sequence data sets from CK+ (http://www.consortium.ri.cmu.edu/ckagree/) and MUG (https://mug.ee.auth.gr/fed/) as experimental data.

[0058] Step 2: after the data is acquired, it is preprocessed. In this embodiment, the two expressions of happiness and surprise are taken as examples to study the proposed algorithm. The CK+ data set contains very little expression data and only partial expression labels, so to make full use of the data, the unclassified happy and surprised expressions must be additionally classified. In the MUG data set, each subject's single expression contains some repeated sequences...
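As an illustration of the preprocessing in Step 2, the sketch below detects 68 facial landmarks with dlib, crops each frame around the landmarks, resizes it to a uniform size, and writes it into a per-intensity folder. The patent only states that key points are extracted and the images are cropped to a uniform size and manually grouped by intensity; the use of dlib, the 128×128 output size, the crop margin, and the folder layout are assumptions for illustration.

```python
# Hypothetical preprocessing sketch (paths, image size, and folder layout are assumed).
import os
import cv2
import dlib
import numpy as np

IMG_SIZE = 128
detector = dlib.get_frontal_face_detector()
# Pre-trained 68-landmark model, downloaded separately from dlib.net.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def crop_by_landmarks(img_bgr):
    """Crop the face region spanned by the detected landmarks, or return None."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    pts = np.array([[shape.part(i).x, shape.part(i).y] for i in range(68)])
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    m = int(0.2 * (x1 - x0))              # small margin around the landmark bounding box
    crop = img_bgr[max(y0 - m, 0):y1 + m, max(x0 - m, 0):x1 + m]
    return cv2.resize(crop, (IMG_SIZE, IMG_SIZE))

def preprocess(src_dir, dst_dir, intensity_label):
    """Crop every frame in src_dir and store it under dst_dir/<intensity_label>/."""
    out_dir = os.path.join(dst_dir, str(intensity_label))
    os.makedirs(out_dir, exist_ok=True)
    for name in sorted(os.listdir(src_dir)):
        img = cv2.imread(os.path.join(src_dir, name))
        if img is None:
            continue
        face = crop_by_landmarks(img)
        if face is not None:
            cv2.imwrite(os.path.join(out_dir, name), face)
```

The resulting per-intensity folders can then be split into training and test data before being fed to the network, as described in Step 2 of the method.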



Abstract

The invention discloses a facial expression synthesis method based on a generative adversarial network. The method comprises the following steps: firstly, acquiring facial expression data, preprocessing the data, cropping the facial images and dividing them into training data and test data, and manually classifying the training data into different categories according to different facial expression intensities; then constructing a generative adversarial network, and adding expression-intensity discrimination information and the correlation information between different expression intensities into the network; training and testing the generative adversarial network with the preprocessed expression data, and adjusting the network parameters to optimize the generative adversarial network model; and finally, selecting test data with neutral expressions and inputting it into the trained generative adversarial network model to obtain expression images of different intensities. According to the method, facial expression images of different intensities can be synthesized from a neutral, expressionless facial image; the method is convenient and intuitive, and the synthesized expression intensities are more vivid and realistic.
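To illustrate the final step described in the abstract, the hypothetical sketch below feeds a single neutral-face test image into a trained generator once per intensity label, producing a set of expression images of increasing intensity. It reuses the Generator class assumed in the earlier sketch; the checkpoint file name, the 128×128 input size, and the [-1, 1] normalization are placeholders rather than details given in the patent.

```python
# Hypothetical inference sketch: one neutral face in, one image per intensity label out.
import torch
import cv2
import numpy as np

def synthesize_intensities(generator, neutral_bgr, num_intensities=5, device="cpu"):
    """Return one synthesized image per intensity label for a neutral input face."""
    img = cv2.resize(neutral_bgr, (128, 128)).astype(np.float32) / 127.5 - 1.0  # scale to [-1, 1]
    x = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0).to(device)
    outputs = []
    generator.eval()
    with torch.no_grad():
        for k in range(num_intensities):
            label = torch.tensor([k], dtype=torch.long, device=device)
            fake = generator(x, label)[0]                                  # (C, H, W) in [-1, 1]
            out = ((fake.permute(1, 2, 0).cpu().numpy() + 1.0) * 127.5).astype(np.uint8)
            outputs.append(out)
    return outputs

# Usage (file names are placeholders):
# g = Generator(); g.load_state_dict(torch.load("generator.pth", map_location="cpu"))
# images = synthesize_intensities(g, cv2.imread("neutral_face.png"))
```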

Description

Technical field

[0001] The invention relates to the technical fields of deep learning and image processing, in particular to a method for synthesizing human facial expressions based on a generative adversarial network.

Background technique

[0002] Face image processing is a broad research topic in the fields of computer vision and graphics. Facial expression is not only a subtle form of body language, but also an important way for people to convey emotional information. In recent years, with the development of computer information technology and services, people increasingly hope that computer communication can show anthropomorphic emotions and provide a new sense of immersion in human-computer interaction, which also promotes the development of expression synthesis. Facial expression synthesis has thus become one of the current research hotspots, with a wide range of applications such as human-computer interaction, virtual reality, digital entertainment and other ...


Application Information

IPC(8): G06T11/40, G06K9/00
CPC: G06T11/40, G06V40/174, G06V40/172, Y02T10/40
Inventor: 唐金辉, 孙运莲, 柴子琪
Owner: NANJING UNIV OF SCI & TECH