DeepFake defense method and system based on visual adversarial reconstruction

A vision and coding technology, applied in character and pattern recognition, fraud detection, computer components, etc., which addresses the problem that existing defenses break DeepFake by adding adversarial perturbations that are themselves easily destroyed.

Active Publication Date: 2022-02-01
INST OF AUTOMATION CHINESE ACAD OF SCI

AI Technical Summary

Problems solved by technology

However, while breaking DeepFake, the added adversarial perturbation is easily destroyed...



Examples


Embodiment 1

[0060] The invention discloses a DeepFake defense method based on visual adversarial reconstruction, where DeepFake denotes deep facial forgery. Figure 1 is a flowchart of a DeepFake defense method based on visual adversarial reconstruction according to an embodiment of the present invention. As shown in Figure 1, the method includes:

[0061] Step S1: prepare a face data set; denote the face generator as G(·) and the target DeepFake model as F(·). The face generator and the target DeepFake model are existing networks with known structures.

[0062] Step S2: design a face encoder, denoted E(·), and a face discriminator, denoted D(·). Use the samples x in the face data set for adversarial training of the face encoder and the face discriminator; apply the adversarially trained face encoder to obtain the initial latent-space code z0, and fine-tune the adversarially trained face encoder with the initial latent-space code...
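The fine-tuning of the initial latent code in Step S2 can be sketched as a small GAN-inversion loop. The sketch below is a minimal illustration under assumed toy models, not the patented implementation: it replaces the deep generator G(·) and encoder E(·) with linear maps and refines z0 = E(x) by gradient descent on the reconstruction error ||G(z) - x||².

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, img_dim = 8, 16

# Toy linear stand-ins (assumptions, not the patent's networks):
A = rng.standard_normal((img_dim, latent_dim))        # generator G(z) = A @ z
B = rng.standard_normal((latent_dim, img_dim)) * 0.1  # encoder   E(x) = B @ x

def G(z):
    return A @ z

def E(x):
    return B @ x

x = rng.standard_normal(img_dim)  # one "real face" sample

# Initial latent code from the (toy) encoder, then fine-tune it so that
# G(z) reconstructs x: minimize ||G(z) - x||^2 by gradient descent.
z = E(x)
lr = 0.005
for _ in range(1000):
    grad = 2 * A.T @ (G(z) - x)   # gradient of the reconstruction loss
    z = z - lr * grad

err0 = np.linalg.norm(G(E(x)) - x)  # reconstruction error before fine-tuning
err1 = np.linalg.norm(G(z) - x)     # reconstruction error after fine-tuning
```

In the patent's setting the same idea applies with deep networks: the encoder only provides a good initialization, and per-sample gradient refinement of the latent code is what drives the reconstruction toward the real image.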

Embodiment 2

[0101] The invention discloses a DeepFake defense system based on visual adversarial reconstruction. Figure 7 is a structural diagram of a DeepFake defense system based on visual adversarial reconstruction according to an embodiment of the present invention. As shown in Figure 7, the system 100 includes:

[0102] The first processing module 101 is configured to prepare a face data set; the face generator is denoted as G(·) and the target DeepFake model as F(·). The face generator and the target DeepFake model are existing networks with known structures.

[0103] The second processing module 102 is configured to design a face encoder, denoted E(·), and a face discriminator, denoted D(·); to use the samples x in the face data set for adversarial training of the face encoder and the face discriminator; to apply the adversarially trained face encoder to obtain the initial latent-space code z0; and to fine-tune the adversarially trained face encoder...

Embodiment 3

[0129] The present invention discloses an electronic device. The electronic device includes a memory and a processor, the memory storing a computer program. When the processor executes the computer program, the steps of any one of the DeepFake defense methods based on visual adversarial reconstruction disclosed in the embodiments of the present invention are implemented.

[0130] Figure 8 is a structural diagram of an electronic device according to an embodiment of the present invention. As shown in Figure 8, the electronic device includes a processor, a memory, a communication interface, a display screen and an input device connected through a system bus. The processor of the electronic device provides computation and control capabilities. The memory of the electronic device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and computer programs. The internal ...



Abstract

The invention provides a DeepFake defense method and system based on visual adversarial reconstruction. The method comprises two stages. In the first stage, real face data are converted into latent codes from which a generator can realistically reconstruct the real images. This is treated as the inverse problem of a GAN: an encoder is trained to produce a latent embedding, which serves as an initialization and is then fine-tuned. In the second stage, a search is performed in the neighborhood of the latent embedding obtained in the first stage for an optimal embedding that yields a near-perfect reconstruction while invalidating the DeepFake. During this search, the latent embedding is optimized using gradient information from the target DeepFake model and is restricted to a small modification range so as to satisfy the visual-similarity requirement.
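The second-stage neighborhood search described above can be sketched as a projected gradient ascent in latent space. The example below is a hypothetical toy, not the patented networks: G and the target DeepFake model F are linear stand-ins, and the embedding z is pushed to maximally change F's output while being clipped back into a small box around the stage-one embedding z1.

```python
import numpy as np

rng = np.random.default_rng(1)
latent_dim, img_dim = 8, 16

# Hypothetical linear stand-ins for the generator G and target DeepFake model F.
A = rng.standard_normal((img_dim, latent_dim))      # G(z) = A @ z
Wf = rng.standard_normal((img_dim, img_dim)) / 4.0  # F(x) = Wf @ x

def G(z):
    return A @ z

def F(x):
    return Wf @ x

z1 = rng.standard_normal(latent_dim)  # stage-one embedding (given)
y_clean = F(G(z1))                    # DeepFake output on the clean reconstruction

eps, alpha = 0.05, 0.01               # small modification range and step size
z = z1 + rng.uniform(-eps, eps, latent_dim)  # random start inside the ball
for _ in range(100):
    # Gradient of 0.5 * ||F(G(z)) - y_clean||^2 with respect to z.
    grad = A.T @ Wf.T @ (F(G(z)) - y_clean)
    z = z + alpha * np.sign(grad)       # ascend: disrupt the DeepFake output
    z = np.clip(z, z1 - eps, z1 + eps)  # stay in the neighborhood of z1

disruption = np.linalg.norm(F(G(z)) - y_clean)  # change in DeepFake output
visual_gap = np.linalg.norm(G(z) - G(z1))       # change in the reconstruction
```

The clipping step is what enforces the "small modification range": the reconstruction G(z) stays close to G(z1) while the DeepFake model's output is driven away from its clean behavior.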

Description

Technical field

[0001] The invention belongs to the field of DeepFake defense, and in particular relates to a DeepFake defense method and system based on visual adversarial reconstruction.

Background technique

[0002] With the advent of sophisticated image and video synthesis techniques, especially Generative Adversarial Networks (GANs)~\cite{goodfellow2014generative}, it has become easier and easier to generate high-quality, convincing fake videos. DeepFake~\cite{choi2018stargan,karras2019style,karras2020analyzing} is a new genre of synthetic video in which the face of a subject is modified to a target face, so as to simulate the target subject in a specific environment and create convincing videos of real events. Effective measures should therefore be formulated to combat such DeepFakes and protect personal security and privacy.

[0003] Existing DeepFake defense techniques mainly focus on passive detection, i.e. exploiting artifacts in generated fake faces to detect them. Specifically...


Application Information

IPC(8): G06V40/16, G06V40/40
Inventors: 董晶, 王伟, 彭勃, 何子文, 项伟, 谭铁牛
Owner INST OF AUTOMATION CHINESE ACAD OF SCI