Face privacy protection method based on generative adversarial network
A privacy-preserving generative technology, applied to biological neural network models, neural learning methods, digital data protection, and related fields. It addresses problems such as unrealistic generated image quality, the high time and cost of annotating data and training networks, and failure of the model to converge.
Embodiment
[0137] 1. Datasets
[0138] We validate the performance of FPGAN on four public datasets.
[0139] (1) CelebA dataset. The CelebA dataset contains 202,599 facial images of 10,177 identities, each annotated with 5 landmark locations and 40 binary attributes. We selected 1,700 neutral images and 1,700 smiling images as training data, and 200 neutral images and 200 smiling images as test data.
[0140] (2) MORPH dataset. This dataset contains 55,000 face images of more than 13,000 individuals with varying demographic characteristics (age, gender, and race). Here, we use only the male data because of the limited number of female subjects. We used 1,700 images of long-haired men and 1,700 images of short-haired men from the MORPH dataset as training data, and 200 long-haired and 200 short-haired images as test data.
[0141] (3) RaFD dataset. Released in 2010, this dataset contains 8,040 images covering 8 facial expressions: anger, disgust, fear, joy, sadness, surprise, co...