Multi-supervision face in-vivo detection method fusing multi-scale features

A face liveness detection technology fusing multi-scale features, applied to spoof detection, neural learning methods, biometric recognition, and related fields. It addresses the increased time complexity of liveness detection, and achieves improved performance and generalization ability, good robustness, and high detection accuracy.

Pending Publication Date: 2022-06-28
无锡致同知新科技有限公司

AI Technical Summary

Problems solved by technology

However, most current deep learning algorithms focus on optimizing the neural network model while ignoring the effectiveness of traditional feature-description operators for feature extraction. Existing l...


Figures: Multi-supervision face in-vivo detection method fusing multi-scale features

Examples


Embodiment 1

[0043] Referring to Figures 1 to 3, an embodiment of the present invention provides a multi-supervised face liveness detection method fusing multi-scale features, comprising:

[0044] S1: Collect an image dataset and preprocess the dataset.
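This excerpt does not spell out the preprocessing steps of S1, so the following is only a minimal sketch of typical face-frame preprocessing (centre crop, resize, scale to [0, 1]); the function name `preprocess` and the 256-pixel target size are illustrative assumptions, not the patent's configuration.

```python
import numpy as np

def preprocess(frame: np.ndarray, size: int = 256) -> np.ndarray:
    """Hypothetical preprocessing sketch: centre-crop the frame to a
    square, resize via nearest-neighbour index sampling, and scale
    pixel values to [0, 1]. The exact pipeline is not given in the
    source text."""
    h, w = frame.shape[:2]
    s = min(h, w)
    top, left = (h - s) // 2, (w - s) // 2
    crop = frame[top:top + s, left:left + s]
    idx = np.arange(size) * s // size      # nearest-neighbour sample grid
    resized = crop[idx][:, idx]
    return resized.astype(np.float32) / 255.0
```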

[0045] It should be noted that three mainstream public image datasets are used: OULU-NPU, CASIA-MFSD, and Replay-Attack;

[0046] The evaluation protocols included in the OULU-NPU dataset:

[0047] Evaluate the generalization ability of the model under different lighting and backgrounds;

[0048] Evaluate the generalization ability of the model under different attack methods;

[0049] Explore the impact of different shooting equipment on model performance;

[0050] Evaluate the overall generalization ability of the model across different scenarios, attack methods, and shooting equipment.

[0051] The attack methods of the CASIA-MFSD dataset are divided into:

[0052] Photo attack: a colour-printed face photo, presented flat or bent (warped-photo attack);

[0053] The i...
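The method extracts gradient texture features through central difference convolution. Below is a minimal PyTorch sketch of such a layer in the style popularized by CDCN; the `theta` value, channel counts, and kernel size are illustrative assumptions, not the patent's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CDConv2d(nn.Module):
    """Central difference convolution (CDCN-style sketch).

    Output = conv(x) - theta * (per-channel kernel-weight sum) * x_center,
    i.e. a blend of a vanilla convolution and a convolution over
    centre-subtracted differences, controlled by theta in [0, 1].
    """
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1, theta=0.7):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=padding, bias=False)
        self.theta = theta

    def forward(self, x):
        vanilla = self.conv(x)
        if self.theta == 0:
            return vanilla
        # The centre-difference term reduces to a 1x1 convolution with
        # the per-channel sum of the kernel weights.
        kernel_sum = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        center = F.conv2d(x, kernel_sum)
        return vanilla - self.theta * center
```

With theta = 0 the layer degenerates to a plain convolution; with theta = 1 it responds only to local gradients, which is what makes centre-difference convolution sensitive to the texture cues used in spoof detection.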

Embodiment 2

[0094] Referring to Figure 4, another embodiment of the present invention, differing from the first embodiment, provides a verification test for the multi-supervised face liveness detection method fusing multi-scale features, in order to substantiate the technical effects of the method. In this embodiment, a comparative test is carried out between a traditional technical solution and the method of the present invention, and the test results are compared by means of scientific demonstration to verify the real effect of the method.

[0095] The experiments use the Adam optimizer with an initial learning rate of 1e-4 and a batch size of 8; the programming environment is PyTorch, and the hardware is an NVIDIA RTX 2080Ti graphics card. To verify the effectiveness of the proposed multi-scale feature fusion module and the multiple-supervision strategy, three sets of ablation experiments are performed on the OULU-...
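The stated hyperparameters can be reproduced in a few lines of PyTorch. The tiny model below is only a placeholder standing in for the multi-scale fusion network, and the input resolution and loss function are assumptions made for the sake of a runnable example.

```python
import torch
import torch.nn as nn

# Placeholder model; the real network fuses CDC and receptive-field branches.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
)

# Hyperparameters from the experiment section: Adam, lr = 1e-4, batch size 8.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()  # stand-in for the depth-map / mask supervision losses

batch = torch.randn(8, 3, 256, 256)  # batch size 8; 256x256 input is assumed
target = torch.rand(8, 1)

# One optimization step.
optimizer.zero_grad()
loss = criterion(model(batch), target)
loss.backward()
optimizer.step()
```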



Abstract

The invention discloses a multi-supervised face liveness detection method fusing multi-scale features, comprising the following steps: acquiring an image dataset and preprocessing it; extracting gradient texture features through central difference convolution and fusing the encodings; extracting multi-scale discriminative features through a group receptive-field branch, then concatenating and fusing them with the gradient texture branch; inputting the fused features into a residual structure for deep semantic learning and encoding, and feeding the result into a depth-map generator and a mask generator to obtain feature maps; supervising with the depth map, with the binary mask providing auxiliary supervision; and fusing the outputs of the depth-map generator and the mask generator to compute a prediction score, realizing end-to-end liveness detection. The invention improves the performance and generalization ability of the network, has the advantages of a small parameter count and end-to-end detection, and achieves higher detection accuracy and better robustness than existing mainstream liveness detection algorithms.
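The abstract ends by fusing the depth-map and mask outputs into a prediction score, but the fusion rule itself is not given in this excerpt. The sketch below uses the simple mean-activation average common in depth-supervised liveness work; the 0.5 weights and the decision threshold are assumptions.

```python
import torch

def liveness_score(depth_map: torch.Tensor, mask_map: torch.Tensor) -> torch.Tensor:
    """Fuse the depth-map and binary-mask generator outputs into one
    scalar score by averaging their mean activations (an assumed rule;
    the patent does not specify the fusion in this excerpt)."""
    return 0.5 * depth_map.mean() + 0.5 * mask_map.mean()

depth = torch.rand(1, 1, 32, 32)  # hypothetical generator outputs in [0, 1]
mask = torch.rand(1, 1, 32, 32)
score = liveness_score(depth, mask)
is_live = score.item() > 0.5      # decision threshold is an assumption
```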

Description

technical field

[0001] The present invention relates to the technical field of face liveness detection, and in particular to a multi-supervised face liveness detection method fusing multi-scale features.

Background technique

[0002] In recent years, face recognition systems have been widely used in transportation, surveillance, and other fields due to their low cost and easy deployment. However, face recognition systems also have certain vulnerabilities: attackers can use the face information of legitimate users to attack the system, causing great harm to users' rights and interests. Common spoofing attacks include photo, video, and 3D mask attacks. To address this problem, more and more researchers have begun to pay attention to face liveness detection technology.

[0003] Face liveness detection is a technology that determines whether the face in front of the camera is a real face or a fraudulent face presented through media such as photos or electronic s...

Claims


Application Information

IPC (8): G06V40/16; G06V40/40; G06V10/80; G06V10/82; G06V10/52; G06N3/04; G06N3/08
CPC: G06N3/084; G06N3/045; G06F18/253
Inventors: 宋晓宁, 陈苏阳, 周晋成
Owner: 无锡致同知新科技有限公司