The invention discloses a refined
feature fusion method for counterfeit face
video detection, and relates to the field of pattern recognition. The method comprises the following steps: performing frame decomposition on real and fake face videos, converting each video file into a continuous image frame sequence; performing face position detection on the continuous image frame sequence, and enlarging the detected face box so that it contains some background; cropping the face box from each image frame to obtain a face image training set, and training an EfficientNet B0 model; randomly selecting N consecutive frames from the face image sequence, and inputting the N consecutive frames into the trained EfficientNet B0 model to obtain a group of feature maps; decomposing the feature map group into individual feature maps, re-stacking the feature maps of the same channel in the original temporal order to obtain a new feature map group, performing secondary
feature extraction to obtain a feature vector, connecting the feature vector to a single neuron, and performing the final real/fake classification of the video clip with sigmoid as the activation function. According to the method, spatial-domain information is preserved, temporal-domain information is fully extracted, and forgery detection accuracy is effectively improved.
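
The per-channel re-stacking and secondary feature extraction step may be easier to follow in code. The sketch below is a minimal PyTorch illustration under assumed choices (torchvision's efficientnet_b0 as the backbone, N = 5 frames, a shared 2-D convolution and average pooling as the secondary extractor); it is not the patent's exact architecture.

```python
# Minimal sketch: per-channel temporal re-stacking of EfficientNet-B0 feature
# maps followed by secondary feature extraction and a single sigmoid neuron.
# Backbone choice, N, and the temporal conv head are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0


class TemporalRestackClassifier(nn.Module):
    def __init__(self, num_frames: int = 5):
        super().__init__()
        # EfficientNet-B0 feature extractor (assumed trained on single face crops);
        # for a 224x224 input it yields feature maps of shape [B, 1280, 7, 7].
        self.backbone = efficientnet_b0(weights=None).features
        self.num_frames = num_frames
        # Secondary feature extraction: each re-stacked group has N "channels"
        # (one map per frame), so a shared Conv2d mixes temporal information.
        self.temporal_conv = nn.Conv2d(num_frames, 8, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Single output neuron; sigmoid gives the real/fake probability.
        self.classifier = nn.Linear(1280 * 8, 1)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: [B, N, 3, H, W] -- N consecutive face crops from one video.
        b, n, c, h, w = clip.shape
        feats = self.backbone(clip.reshape(b * n, c, h, w))   # [B*N, C', H', W']
        feats = feats.reshape(b, n, *feats.shape[1:])         # [B, N, C', H', W']
        # Re-stack: group maps of the same channel across the N frames,
        # keeping the original temporal order -> [B, C', N, H', W'].
        regrouped = feats.permute(0, 2, 1, 3, 4)
        b, cp, n, hp, wp = regrouped.shape
        # Apply the shared temporal conv to every channel group.
        mixed = self.temporal_conv(regrouped.reshape(b * cp, n, hp, wp))
        vec = self.pool(mixed).reshape(b, -1)                 # feature vector
        return torch.sigmoid(self.classifier(vec))            # fake probability


# Example usage on a random batch of two 5-frame clips of 224x224 face crops.
model = TemporalRestackClassifier(num_frames=5)
prob = model(torch.randn(2, 5, 3, 224, 224))
print(prob.shape)  # torch.Size([2, 1])
```

Because the re-stacking only permutes the frame and channel axes, each spatial feature map is kept intact while the secondary convolution operates along the temporal dimension, which is how the method retains spatial-domain information while extracting temporal-domain information.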