Shared updatable Deepfake video content supervision method and system

A video content supervision technology applied in the field of shared, updatable Deepfake video content supervision. It addresses problems such as reduced detection accuracy, failure of detection methods, and degraded detection performance, and achieves fast detection, a high degree of interpretability, and relief of the overfitting problem.

Pending Publication Date: 2021-10-22
BEIJING TECHNOLOGY AND BUSINESS UNIVERSITY

AI Technical Summary

Problems solved by technology

Detection methods based on intra-frame differences focus on fine-grained facial feature differences while ignoring the contextual information of the Deepfake video; detection methods based on inter-frame differences depend on the number of key frames that can be extracted, so their detection performance drops sharply when the video is too short.
At the same time, because Deepfake generation technology is constantly updated and upgraded, existing detection methods rely on specific data sets and generation algorithms. When new Deepfake video content comes from a new generation technique, or contains no samples from those specific data sets, the accuracy of some detection methods drops or the methods fail altogether; that is, the model suffers from overfitting.



Examples


Embodiment 1

[0026] As shown in Figure 1 and Figure 2, in one embodiment, a shared and updatable Deepfake video content supervision method provided by an embodiment of the present invention comprises the following steps:

[0027] Step S1: Input the Deepfake video into the preprocessing module, extract key frames of the video and capture face images as training samples;
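
For illustration only, the following is a minimal sketch of the key-frame sampling in Step S1, assuming OpenCV (cv2) is available; the fixed-interval sampling shown here is an assumed simplification, not necessarily the key-frame criterion used by the invention.

```python
# Illustrative sketch of Step S1: sample candidate key frames from a video.
# The fixed-interval rule below is an assumed simplification.
import cv2

def extract_key_frames(video_path: str, every_n: int = 30) -> list:
    """Return every n-th frame of the video as a BGR image (numpy array)."""
    frames = []
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n == 0:
            frames.append(frame)
        index += 1
    cap.release()
    return frames
```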

[0028] Step S2: Extract the spatial domain and frequency domain features of the training samples, input the feature information into the SVM classification model for training, and obtain the initial content supervision model;
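
The following hedged sketch illustrates the frequency-domain part of Step S2, assuming a 2-D DCT descriptor and scikit-learn's SVC as the SVM classifier; both choices are assumptions made for illustration. The spatial-domain features and the feature fusion are sketched later with the training module of Embodiment 2.

```python
# Illustrative sketch of Step S2 (frequency-domain part): a 2-D DCT descriptor
# per face crop and an SVM classifier. The DCT block size and RBF kernel are
# assumptions, not the invention's stated choices.
import numpy as np
from scipy.fftpack import dct
from sklearn.svm import SVC

def frequency_features(gray_face: np.ndarray, k: int = 8) -> np.ndarray:
    """2-D DCT of a grayscale face crop, keeping the k x k low-frequency block."""
    coeffs = dct(dct(gray_face.astype(float), axis=0, norm="ortho"),
                 axis=1, norm="ortho")
    return coeffs[:k, :k].ravel()

def train_initial_model(gray_faces, labels):
    """labels: 1 for Deepfake, 0 for genuine (assumed encoding)."""
    X = np.stack([frequency_features(img) for img in gray_faces])
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(X, labels)
    return clf
```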

[0029] Step S3: Establish a shared and updatable strategy based on blockchain technology, and design an incentive mechanism to collect new and effective Deepfake video data;
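
The sketch below is only one plausible, assumed reading of "new and effective" data in Step S3 (a contributed sample counts as effective if the current model misclassifies it, and effective contributions earn a credit); the blockchain ledger and reward settlement are abstracted away and all names are hypothetical.

```python
# Purely illustrative reading of the Step S3 incentive: a contribution counts
# as "new and effective" if the current model misclassifies it, and effective
# contributions earn the contributor a credit. Ledger and payout details are
# abstracted away; all names are hypothetical.
def accept_contribution(model, feature_vector, claimed_label, contributor, credits):
    """Return True and credit the contributor if the sample fools the model."""
    predicted = model.predict([feature_vector])[0]
    is_effective = predicted != claimed_label
    if is_effective:
        credits[contributor] = credits.get(contributor, 0) + 1
    return is_effective
```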

[0030] Step S4: After the number of collected samples reaches the threshold, retrain and update the initial model; after the update, share the Deepfake video content detection method with the sample contributors ...
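
A minimal sketch of the threshold-triggered update in Step S4 follows; the threshold value and the retraining function are assumptions, and the blockchain storage and the sharing of the updated detector with contributors are abstracted away.

```python
# Illustrative sketch of Step S4: retrain once the pool of newly collected
# samples reaches a threshold, then clear the pool for the next round.
# UPDATE_THRESHOLD and retrain_fn are assumptions; blockchain storage and
# sharing with contributors are abstracted away.
UPDATE_THRESHOLD = 500

def maybe_update(model, collected_samples, retrain_fn):
    """collected_samples: list of (feature_vector, label) pairs."""
    if len(collected_samples) < UPDATE_THRESHOLD:
        return model, collected_samples          # keep waiting for more samples
    features, labels = zip(*collected_samples)
    new_model = retrain_fn(list(features), list(labels))
    return new_model, []                         # pool resets after each update
```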

Embodiment 2

[0075] As shown in Figure 8, an embodiment of the present invention provides a shared and updatable Deepfake video content supervision system, comprising the following modules:

[0076] The data preprocessing module is used to process video data on the blockchain into sample data suitable for model training: it extracts key frames of the video using a segment-classification-based method and, after face recognition, crops each frame image to a square image of fixed size;
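
As a hedged illustration of the fixed-square face crop performed by the preprocessing module, the sketch below assumes OpenCV's bundled Haar cascade for face detection and a 256x256 output size; neither choice is stated by the invention.

```python
# Illustrative sketch of the preprocessing module's face crop: detect a face
# in a key frame and resize the square crop to a fixed size. The Haar cascade
# and the 256x256 output are assumptions.
import cv2

FACE_SIZE = 256  # assumed side length of the square training image
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_square_face(frame):
    """Return the first detected face as a FACE_SIZE x FACE_SIZE crop, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    side = max(w, h)                              # force a square region
    crop = frame[y:y + side, x:x + side]
    return cv2.resize(crop, (FACE_SIZE, FACE_SIZE))
```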

[0077] The supervision model training module is used to obtain the initial content supervision model by extracting the spatial domain features and frequency domain features of each sample image, cascading and normalizing them into a global discriminant feature, and inputting it into the SVM model for training;
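
A minimal sketch of the cascading (concatenation) and normalization into a global discriminant feature follows; the intensity-histogram spatial descriptor and the L2 normalization are assumed placeholders for whatever descriptors the invention actually uses. Features produced this way would then be stacked into the design matrix passed to the SVM, as in the Step S2 sketch.

```python
# Illustrative sketch of the training module's feature fusion: concatenate
# ("cascade") spatial and frequency descriptors and L2-normalize the result
# into one global discriminant feature. The histogram spatial descriptor is
# an assumed placeholder.
import numpy as np

def spatial_features(gray_face: np.ndarray, bins: int = 64) -> np.ndarray:
    """Assumed spatial descriptor: a normalized intensity histogram."""
    hist, _ = np.histogram(gray_face, bins=bins, range=(0, 255), density=True)
    return hist

def global_feature(gray_face: np.ndarray, freq_fn) -> np.ndarray:
    """Cascade spatial and frequency features, then L2-normalize."""
    fused = np.concatenate([spatial_features(gray_face), freq_fn(gray_face)])
    norm = np.linalg.norm(fused)
    return fused / norm if norm > 0 else fused
```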

[0078] The incentive mechanism module is used to encourage sample contributors to upload high-quality new data, and at the same time prevent malicious a...



Abstract

The invention relates to a shared and updatable Deepfake video content supervision method and system. The method comprises the steps of: S1, inputting a Deepfake video into a preprocessing module, extracting video key frames, and cropping face images as training samples; S2, extracting spatial domain and frequency domain features of the training samples, and inputting the feature information into an SVM classification model for training to obtain an initial content supervision model; S3, establishing a sharing and updating strategy based on blockchain technology, and designing an incentive mechanism to collect new and effective Deepfake video data; and S4, after the number of collected samples reaches a threshold value, performing update training on the initial model and, after the update, sharing the Deepfake video content detection method with the sample contributors and waiting for the next update. The invention enables the Deepfake video content detection method to be shared and continuously updated, effectively eliminates the sample-imbalance problem of Deepfake video data sets, solves the overfitting problem, and improves the generalization ability of the content supervision model.

Description

Technical Field

[0001] The invention relates to the fields of machine learning and Internet content supervision, and in particular to a shared and updatable Deepfake video content supervision method and system.

Background Art

[0002] With the development of artificial intelligence technology, deep forgery (Deepfake) technology based on deep learning is becoming increasingly mature. Using multimedia tampering tools, people's faces in videos can be tampered with at will, in a way that is almost impossible to detect with the naked eye. With the rise of short video as a new means of content dissemination, Deepfake videos spread faster and wider, and may be used for activities prohibited by laws and regulations, such as endangering national security and infringing on the legitimate rights and interests of others, adversely affecting social stability. In December 2019, the National Internet Information Office, the Ministry of Culture and Tourism, and the State Administration of...

Claims


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06K9/00G06K9/34G06K9/46G06K9/62H04L29/06H04N21/234H04N21/2743
CPCH04N21/23418H04N21/2743H04L63/1416G06F18/2411G06F18/214
Inventor 毛典辉赵爽郝治昊李海生左敏蔡强
Owner BEIJING TECHNOLOGY AND BUSINESS UNIVERSITY