Deep fake face video positioning method based on space-time fusion

A spatio-temporal fusion deep-learning technology, applied in the fields of image processing and image recognition, which solves the problems of reduced recognition accuracy caused by ignoring time-domain features, and of impaired generalization ability and functional completeness of the recognition system.

Active Publication Date: 2021-06-22
XIDIAN UNIV
Cites: 5 · Cited by: 9


Problems solved by technology

[0006] The purpose of the present invention is to address the deficiencies of the above-mentioned prior art by providing a method for locating deep-fake face videos based on a spatio-temporal fusion multi-task model. The method is used to solve the reduction in recognition accuracy caused by ignoring the temporal features of forged videos during recognition, as well as the impairment of the recognition system's generalization ability and functional completeness caused by neglecting unseen attack categories and by over-simplified tasks.
A multi-task fusion network and a multi-task fusion loss function are constructed. Because the network integrates related tasks with common characteristics for training, it solves the problem that ignoring unseen attack categories and relying on a single task impair the generalization ability and functional completeness of the recognition system.




Embodiment Construction

[0053] The present invention will be described in further detail below in conjunction with the accompanying drawings.

[0054] Referring to Figure 1, the specific steps of the present invention are described in further detail.

[0055] Step 1, construct a convolutional neural network.

[0056] Build a 13-layer convolutional neural network whose structure is: the first convolutional layer, the second convolutional layer, the first pooling layer, the third convolutional layer, the fourth convolutional layer, the second pooling layer, the fifth convolutional layer, the sixth convolutional layer, the seventh convolutional layer, the third pooling layer, the eighth convolutional layer, the ninth convolutional layer, and the tenth convolutional layer.

[0057] Set the size of the convolution kernels of the first to tenth convolutional layers to 3×3, the number of convolution kernels to 64, 64, 128, 128, 256, 256, 256, 512, 512, 512, and the step size to 1; the first...
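A minimal PyTorch sketch of the 13-layer network described in [0056]-[0057]. The layer order, 3×3 kernels, channel counts, and stride 1 come from the excerpt; the RGB input, 'same' padding, ReLU activations, 2×2 max pooling, and the class name SpatialBackbone are assumptions, since the text truncates before specifying them:

```python
import torch
import torch.nn as nn

def conv3x3(in_ch, out_ch):
    """3x3 convolution with stride 1; 'same' padding and ReLU are assumed."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
        nn.ReLU(inplace=True),
    )

class SpatialBackbone(nn.Module):  # hypothetical name for the 13-layer network
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            conv3x3(3, 64), conv3x3(64, 64),                          # conv 1-2
            nn.MaxPool2d(2),                                          # pool 1
            conv3x3(64, 128), conv3x3(128, 128),                      # conv 3-4
            nn.MaxPool2d(2),                                          # pool 2
            conv3x3(128, 256), conv3x3(256, 256), conv3x3(256, 256),  # conv 5-7
            nn.MaxPool2d(2),                                          # pool 3
            conv3x3(256, 512), conv3x3(512, 512), conv3x3(512, 512),  # conv 8-10
        )

    def forward(self, x):
        return self.features(x)

# For a 224x224 RGB frame the output is a (1, 512, 28, 28) feature map:
# SpatialBackbone()(torch.randn(1, 3, 224, 224)).shape
```

Under these assumptions the channel progression matches the VGG-style pattern implied by the kernel counts, with each pooling layer halving the spatial resolution.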



Abstract

The invention discloses a deep-fake face video positioning method based on space-time fusion. The method comprises the following steps: (1) constructing a convolutional neural network; (2) constructing a classification network fusing time-domain and space-domain features; (3) constructing a segmentation positioning task network; (4) constructing a reconstruction task network; (5) constructing a multi-task fusion network; (6) generating a multi-task fusion loss function; (7) generating a training set; (8) training the multi-task fusion network; (9) identifying and positioning the deep-fake face video. By constructing a classification network that fuses time-domain and space-domain features, the method extracts more complete intra-frame and inter-frame features and achieves higher accuracy; meanwhile, the multi-task fusion loss function used to train the multi-task fusion network improves that network's accuracy. The method solves the problem that generalization ability and functional completeness are impaired by unseen attack categories and over-simplified tasks.
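A hedged sketch of how the multi-task fusion loss named in the abstract might combine the three task heads (classification, segmentation positioning, and reconstruction). The individual loss choices (cross-entropy, BCE, L1) and the weights w_cls, w_seg, w_rec are assumptions; the excerpt does not give the exact formula:

```python
import torch
import torch.nn.functional as F

def multitask_fusion_loss(cls_logits, cls_labels,
                          mask_logits, mask_labels,
                          recon, frames,
                          w_cls=1.0, w_seg=1.0, w_rec=1.0):
    """Combine the three task losses into a single training objective."""
    loss_cls = F.cross_entropy(cls_logits, cls_labels)          # real/fake classification
    loss_seg = F.binary_cross_entropy_with_logits(mask_logits,  # forged-region mask
                                                  mask_labels)
    loss_rec = F.l1_loss(recon, frames)                         # frame reconstruction
    return w_cls * loss_cls + w_seg * loss_seg + w_rec * loss_rec
```

Training the shared backbone against such a weighted sum is the usual way related tasks with common characteristics are fused, which is consistent with, but not confirmed by, the abstract's description.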

Description

Technical Field

[0001] The invention belongs to the technical field of image processing, and further relates to a method for locating deep-fake face videos based on spatio-temporal fusion within the technical field of image recognition. The invention can be applied to forgery verification of videos containing human faces and to marking of the forged regions.

Background Technique

[0002] Deepfakes refer to any realistic audiovisual content produced with the help of deep learning, as well as the techniques used to create such content. With the continuous development of deep-learning technology, the threshold for using deep-forgery generation techniques has been lowered, the sensory effect has become increasingly realistic, robustness has gradually improved, and data dependence has gradually decreased. Existing deep-forgery methods therefore keep increasing the demand for deep-forgery authentication systems with high generalization ability.

[0003] The current counterfeiting...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/08; G06V40/168; G06V40/172; G06N3/044; G06N3/045; G06F18/2415; G06F18/253; G06F18/214
Inventor: 田玉敏, 吴自力, 王笛, 蔡妍, 潘蓉
Owner: XIDIAN UNIV