A face synthesis method based on a generative adversarial network

A face synthesis technology applied in biological neural network models, neural learning methods, instruments, etc., addressing problems such as blurred generated images, easy imbalance between multiple networks, and errors in synthesized images

Active Publication Date: 2019-04-16
SUN YAT SEN UNIV
Cites: 11 · Cited by: 22

AI Technical Summary

Problems solved by technology

[0005] Existing generative adversarial networks have two shortcomings: a single network generates images of weak, relatively blurred quality, and imbalance between multiple networks easily introduces errors into the synthesized image. To address these problems, the present invention proposes a face synthesis method based on a generative adversarial network. The technical scheme adopted by the present invention is:


Image
  • A face synthesis method based on a generative adversarial network

Examples


Embodiment 1

[0070] As shown in Figures 1 to 4, a face synthesis method based on a generative adversarial network includes constructing and training an optimized TTGAN model, and then using the trained and optimized TTGAN model to perform face synthesis. The TTGAN model is composed of two GAN networks that interact with each other, and its model loss terms are built from a multi-level sparse representation model together with triple-transformation consistency constraints. The steps to train the TTGAN model are as follows:

[0071] The TTGAN model is composed of two generative adversarial networks with identical structures but opposite face synthesis tasks, combined through cyclic interaction. Each generative adversarial network (GAN) is itself a matched combination of a generator G and a discriminator D: the generator's task is to synthesize faces, and the discriminator's task is to distinguish real faces from synthesized ones. The generator of TTGAN adopts the U-net structure of th...
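The cyclic interaction of the two generators can be sketched as follows. This is a hedged illustration, not the patented implementation: the U-net generators are stood in for by trivial invertible linear maps, and the L1 form of the consistency terms is an assumption. The three samples produced in one cycle correspond to the "triple transformation" the model's consistency constraint operates on.

```python
import numpy as np

rng = np.random.default_rng(0)

def g_ab(x):
    # Placeholder for the A->B U-net generator (hypothetical linear map).
    return 0.9 * x + 0.1

def g_ba(x):
    # Placeholder for the B->A U-net generator (inverse of g_ab here).
    return (x - 0.1) / 0.9

real_a = rng.random((4, 8))  # a batch of flattened "face" vectors, domain A

# One cycle produces three transformed samples (the triple transformation):
fake_b = g_ab(real_a)    # 1st transformation: A -> B
rec_a = g_ba(fake_b)     # 2nd transformation: B -> A (cycle back)
fake_b2 = g_ab(rec_a)    # 3rd transformation: A -> B again

# Assumed L1 consistency terms tying the two GANs together:
cycle_loss = np.mean(np.abs(rec_a - real_a))
triple_loss = np.mean(np.abs(fake_b2 - fake_b))
print(cycle_loss, triple_loss)
```

Because the placeholder generators are exact inverses, both losses are near zero here; during real training these terms would penalize any disagreement between the two networks across the cycle.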

Embodiment 2

[0123] This embodiment compares the present invention with prior art Pix2Pix GAN and CycleGAN:

[0124] For an objective and fair comparison, this experiment keeps the basic structure shared by TTGAN and CycleGAN consistent, changing only the newly proposed and added structures; the Pix2Pix GAN structure and hyperparameters keep the model's default settings. The training set, test set, and number of training iterations are kept consistent across all models.

[0125] 1) Facial expression image synthesis based on the AR face database.

[0126] a. Randomly select the image pairs of 84 people, each pair consisting of a neutral (expressionless) face and a smiling face, as the training set, with the corresponding image pairs of the remaining 16 people as the test set.

[0127] b. Use the training set to train TTGAN, CycleGAN and Pix2Pix GAN.

[0128] c. Use the test set to test TTGAN, CycleGAN and Pix2Pix GAN respectively.
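The subject-level split in steps a–c can be sketched as below. This is an illustrative reconstruction, not the authors' code: the file-naming scheme and the total of 100 subjects (84 + 16, consistent with the text) are assumptions.

```python
import random

# 100 AR-database subjects (84 train + 16 test, per the described split).
subjects = [f"subject_{i:03d}" for i in range(100)]

random.seed(42)  # fixed seed only so the sketch is reproducible
train_subjects = set(random.sample(subjects, 84))
test_subjects = [s for s in subjects if s not in train_subjects]

# Each subject contributes one (neutral, smiling) image pair;
# the filenames here are hypothetical.
train_pairs = [(f"{s}_neutral.png", f"{s}_smile.png") for s in train_subjects]
test_pairs = [(f"{s}_neutral.png", f"{s}_smile.png") for s in test_subjects]
print(len(train_pairs), len(test_pairs))
```

Splitting by subject (rather than by image) ensures no person appears in both sets, so the test measures generalization to unseen identities.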

[0129] The comparison of images generated by each mo...



Abstract

For the task of human face synthesis, a triple-transformation virtual generation neural network with multi-level sparse representation (TTGAN) is constructed on the architecture of the adversarial generation network CycleGAN. TTGAN proposes and adds a multi-level sparse representation model and a triple-transformation consistency constraint; target face synthesis for a face image pair is the result of several generative adversarial networks acting in synergy. The multi-level sparse representation model constrains the features extracted from the input picture by the different feature extraction layers of the generating network, including identity information related to the target image. The triple-transformation consistency constraint uses three different samples, containing network state information, generated during one cycle of the model, thereby guiding the two generative adversarial networks of the whole model to cooperate with each other. The multi-level sparse representation and triple-transformation consistency constraint proposed by TTGAN further increase the image generation capability of CycleGAN, so that the synthesized face image better preserves face identity information and appears more realistic.
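The multi-level sparse representation idea, i.e. a sparsity penalty applied to features taken from several extraction layers of the generating network, can be sketched as an L1 term summed over levels. The number of levels, the feature shapes, and the per-level weights below are assumptions for illustration, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for feature maps from three encoder levels of the generator;
# shapes are hypothetical (deeper levels have fewer spatial positions).
features = [rng.standard_normal((16, 2 ** (5 - k))) for k in range(3)]
weights = [1.0, 0.5, 0.25]  # hypothetical per-level weights

# Assumed form of the constraint: a weighted L1 (sparsity) penalty on
# each level's features, summed into one loss term.
sparse_loss = sum(w * np.mean(np.abs(f)) for w, f in zip(weights, features))
print(sparse_loss)
```

Penalizing the L1 norm of intermediate features encourages each level to keep only the strongest activations, which is one common way to make a representation sparse.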

Description

technical field

[0001] The present invention relates to the field of face synthesis and generation networks, and more specifically, to a face synthesis method based on a generative adversarial network.

Background technique

[0002] Face image synthesis is one of the most important research fields in machine vision, with applications in face recognition, image restoration, virtual reality, and other related technologies. In the technical development of face synthesis, the diversity of generated faces and the preservation of face identity are two distinct technical difficulties. One reason is that learning the mapping between attribute variables, such as pose and expression, and high-dimensional representations of face images remains an unsolved problem in academia. Another reason is that illumination, pose, occlusion, and the like greatly change the pixels of face images; compared with the very robust performance of humans, existing algorithms still ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00, G06N3/08
CPC: G06N3/084, G06V40/16, Y02T10/40
Inventor: 杨猛, 叶林彬
Owner: SUN YAT SEN UNIV