Deepfake video detection method based on image group and two-stream network

A video detection technology based on image groups, applied in the field of video detection, which addresses the problems of heavy computational load and low efficiency

Pending Publication Date: 2021-08-20
NANJING UNIV OF INFORMATION SCI & TECH

AI Technical Summary

Problems solved by technology

[0004] In order to overcome the problems of heavy computation and low efficiency in existing deepfake video detection technology, the present invention provides a deepfake video detection method based on an image group and a two-stream network.



Examples


Embodiment Construction

[0034] The present invention is now described in further detail in conjunction with the accompanying drawings.

[0035] Figure 1 shows the flow chart of the present invention; the detailed steps are as follows:

[0036] (1) Extract key frames from the video to be detected to form an image group

[0037] In the video to be detected, face-region images are obtained by cropping at a fixed size, and key frames are extracted using the inter-frame difference method as the image group input to the network: frames with a larger inter-frame difference are selected as key frames. Because video frames are strongly correlated in time, and in order not to lose temporal features, the 10 extracted key frames are combined in sequence into an image group that represents the video. The inter-frame difference is calculated as in formula (1), and the average intensity of the inter-frame difference is calculated as in formula (2),

[0038] absDiff_i = |F_i − F_{i−1}|,   (1)

...
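The key-frame selection of step (1) can be illustrated with a minimal sketch. Assumptions for illustration only: OpenCV and NumPy are used, the face is located with a Haar cascade, the crop size is 224×224, and all function and variable names are hypothetical; only the inter-frame difference scoring, the selection of the 10 highest-scoring frames, and their temporal ordering follow the description above.

```python
# Minimal sketch of step (1): crop fixed-size face regions, score frames by
# the average intensity of the inter-frame difference (formulas (1)-(2)),
# and keep the highest-scoring frames in temporal order as the image group.
# The Haar-cascade detector and 224x224 crop are illustrative assumptions.
import cv2
import numpy as np

def extract_keyframes(video_path, num_keyframes=10, crop_size=224):
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    faces, scores, prev = [], [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        boxes = detector.detectMultiScale(gray, 1.1, 5)
        if len(boxes) == 0:
            continue
        x, y, w, h = boxes[0]
        face = cv2.resize(frame[y:y + h, x:x + w], (crop_size, crop_size))
        if prev is not None:
            # absDiff_i = |F_i - F_{i-1}|; the score is its mean intensity
            diff = cv2.absdiff(face, prev)
            scores.append(float(np.mean(diff)))
            faces.append(face)
        prev = face
    cap.release()
    # keep the frames with the largest average difference, in temporal order
    top = sorted(np.argsort(scores)[-num_keyframes:])
    return [faces[i] for i in top]
```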


Abstract

The invention relates to a Deepfake video detection method based on an image group and a two-stream network. The method comprises the following steps: (1) extracting key frames from a video to be detected to form an image group; (2) inputting the first frame of the image group into the spatial stream of the two-stream network to extract spatial features; (3) differencing the remaining frames of the image group with the first frame to obtain difference images, forming a difference-image sequence, and inputting the difference-image sequence into the temporal stream of the two-stream network to extract temporal features; and (4) fusing the extracted spatial and temporal features and evaluating the authenticity of the video with a dynamic routing algorithm. Compared with the prior art, the method reduces computational redundancy by using the image group, makes the network focus on the key frames, makes full use of the spatial and temporal information of the key frames by fusing the spatial and temporal features, and performs classification through the dynamic routing algorithm to obtain a more accurate evaluation result.
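The overall pipeline in the abstract can be sketched as follows. This is a minimal, hypothetical illustration, not the patented implementation: the ResNet-18 backbones, the 128-dimensional feature sizes, and a plain linear classifier standing in for the dynamic routing algorithm are all assumptions; only the spatial stream on the first key frame, the temporal stream on the difference-image sequence, and the feature fusion follow the steps described above.

```python
# Sketch of steps (2)-(4): spatial stream on the first key frame, temporal
# stream on the nine difference images, feature fusion, and classification.
# Backbones, feature sizes, and the linear classifier (a stand-in for the
# dynamic routing algorithm) are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class TwoStreamDetector(nn.Module):
    def __init__(self, num_diff_frames=9):
        super().__init__()
        # spatial stream: the 3-channel first frame of the image group
        self.spatial = resnet18(num_classes=128)
        # temporal stream: the difference images stacked along channels
        self.temporal = resnet18(num_classes=128)
        self.temporal.conv1 = nn.Conv2d(3 * num_diff_frames, 64,
                                        kernel_size=7, stride=2,
                                        padding=3, bias=False)
        # stand-in for the dynamic-routing classifier of step (4)
        self.classifier = nn.Linear(256, 2)

    def forward(self, image_group):
        # image_group: (B, 10, 3, H, W) key frames in temporal order
        first = image_group[:, 0]                        # (B, 3, H, W)
        diffs = image_group[:, 1:] - first.unsqueeze(1)  # (B, 9, 3, H, W)
        b, n, c, h, w = diffs.shape
        spatial_feat = self.spatial(first)               # spatial features
        temporal_feat = self.temporal(diffs.reshape(b, n * c, h, w))
        fused = torch.cat([spatial_feat, temporal_feat], dim=1)
        return self.classifier(fused)                    # real / fake logits

# usage with hypothetical shapes: a batch of 2 image groups of 10 RGB frames
logits = TwoStreamDetector()(torch.randn(2, 10, 3, 224, 224))
```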

Description

Technical field

[0001] The invention belongs to the field of video detection, and in particular relates to a deepfake video detection method based on an image group and a two-stream network.

Background technique

[0002] With the rise and development of artificial intelligence technology, face-swapping technology has gradually received widespread attention. The emergence of Deepfake is a breakthrough in face-swapping technology: it can replace the facial image of a source person in a video with the facial image of a target person. With the emergence and optimization of generative adversarial networks, face swapping has become easier and less visible to the naked eye. Celebrities and politicians, as public figures, have a large number of videos published on the Internet, which allows criminals to forge videos at will in order to spread false information and create chaos, posing a threat to human society. Therefor...


Application Information

IPC(8): G06K9/00; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/04; G06N3/08; G06V40/168; G06V20/46; G06F18/253; G06F18/24; Y02T10/40
Inventor: 王金伟, 张玫瑰
Owner: NANJING UNIV OF INFORMATION SCI & TECH