Mask robust face recognition network and method, electronic equipment and storage medium

A face recognition network technology, applied in the field of computer vision, that achieves the effect of improving robustness to masked faces

Inactive Publication Date: 2021-12-17
珠海亿智电子科技有限公司
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

[0006] The purpose of the present invention is to provide a mask robust face recognition network and method, electronic equipment and a storage medium.

Method used

Figure 1 is a schematic structural diagram of the mask robust face recognition network provided by the present invention; figure 2 is a flowchart of the mask robust face recognition network training method; figure 3 is a flowchart of the mask robust face recognition method.


Examples


Embodiment 1

[0038] Figure 1 shows a schematic structural diagram of the mask robust face recognition network provided by Embodiment 1 of the present invention. For convenience of description, only the parts relevant to this embodiment are shown, as follows:

[0039] The mask robust face recognition network 1 provided by the embodiment of the present invention includes a whole-image feature extraction network 11, a feature segmentation module 12 connected to the whole-image feature extraction network, and a first branch network 13 and a second branch network 14 each connected to the feature segmentation module. The whole-image feature extraction network is used to extract shallow whole-image features from the input face image, and the feature segmentation module is used to spatially segment the shallow whole-image features at the position of a preset segmentation point to obtain the upper-half and lower-half shallow features.
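The data flow described above (extract shallow whole-image features, split them spatially at a preset row, feed each half to its own branch, and splice the two branch outputs for normal faces) can be sketched with NumPy stand-ins. The feature-map size, split row, and embedding dimension are illustrative assumptions, and the branch networks are replaced by a pooling-plus-random-linear stand-in rather than the patent's actual layers:

```python
import numpy as np

def split_features(feat, split_row):
    """Spatially segment shallow whole-image features (C, H, W) at a preset row."""
    upper = feat[:, :split_row, :]   # upper half: eyes/forehead region
    lower = feat[:, split_row:, :]   # lower half: region typically covered by a mask
    return upper, lower

def branch(feat, embed_dim, rng):
    """Stand-in for a branch network: global-average-pool then a random linear map."""
    pooled = feat.mean(axis=(1, 2))                    # (C,)
    w = rng.standard_normal((embed_dim, pooled.size))  # illustrative weights
    return w @ pooled                                  # (embed_dim,)

rng = np.random.default_rng(0)
shallow = rng.standard_normal((128, 14, 14))   # illustrative shallow feature map
upper, lower = split_features(shallow, split_row=7)
up_feat = branch(upper, 256, rng)              # used alone for masked faces
full_feat = np.concatenate([up_feat, branch(lower, 256, rng)])  # spliced, for normal faces
```

The key design point is that the upper-half embedding is a self-contained representation: a masked probe can be matched using `up_feat` alone, while unmasked faces still benefit from the full spliced feature.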

Embodiment 2

[0045] The second embodiment of the present invention is based on Embodiment 1. Figure 2 shows the implementation process of the mask robust face recognition network training method provided by the second embodiment of the present invention. For convenience of explanation, only the parts relevant to this embodiment are shown, as follows:

[0046] In step S201, a training data set is obtained, and the training data set includes a plurality of normal face images.

[0047] In the embodiment of the present invention, a basic data set can be obtained first; the basic data set includes multiple normal face images. The basic data set can be image data selected from a general data set, such as MegaFace, which is not limited here. After obtaining the basic data set, key point detection is performed on each face image in the basic data set, and the detected key points are aligned with the standard key points. Specifically, the face key poin...
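The alignment step described above (mapping detected face key points onto a set of standard key points) is commonly implemented as a least-squares similarity transform. A minimal sketch using the Umeyama method follows; the five-point template coordinates are illustrative assumptions for a 112x112 crop, not values from the patent:

```python
import numpy as np

def similarity_transform(src, dst):
    """Umeyama least-squares similarity transform mapping src keypoints onto dst.
    Returns a 2x3 matrix [s*R | t] suitable for warping the image."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    sc, dc = src - mu_s, dst - mu_d
    cov = dc.T @ sc / len(src)                 # cross-covariance dst<-src
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))         # guard against reflections
    D = np.diag([1.0, d])
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / sc.var(axis=0).sum()
    t = mu_d - scale * R @ mu_s
    return np.hstack([scale * R, t[:, None]])

# Hypothetical standard template: eyes, nose tip, mouth corners (112x112 crop)
STD_PTS = np.array([[38.3, 51.7], [73.5, 51.5], [56.0, 71.7],
                    [41.5, 92.4], [70.7, 92.2]])
detected = STD_PTS + np.random.default_rng(1).normal(0.0, 2.0, STD_PTS.shape)
M = similarity_transform(detected, STD_PTS)    # warp the image with M to align it
```

In practice `M` would be passed to an image-warping routine (e.g. an affine warp) so every training face lands on the same canonical layout before feature extraction.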

Embodiment 3

[0057] Embodiment 3 of the present invention is based on Embodiment 1. Figure 3 shows the implementation process of the mask robust face recognition method provided by the third embodiment of the present invention. For convenience of explanation, only the parts relevant to this embodiment are shown, as follows:

[0058] In step S301, shallow whole image features are extracted from the input face image.

[0059] In the embodiment of the present invention, the input face image is an aligned face image. Before input, the acquired face image can be aligned to the preset face key points (for example, five key points: the two eyes, the nose tip, and the left and right mouth corners), and the aligned face image is then input into the trained face recognition network. The trained face recognition network can be trained using the method described in Embodiment 2.

[0060] In step S302, the shallow whole image features are spatia...
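Once the network has produced an embedding (the upper-half embedding alone for masked faces, or the spliced full embedding for normal faces), recognition reduces to nearest-neighbor matching against a gallery. A minimal sketch, assuming cosine similarity and an illustrative acceptance threshold (neither specified by the patent):

```python
import numpy as np

def cosine_match(query, gallery, threshold=0.3):
    """Match a query embedding against gallery embeddings by cosine similarity.
    Returns (best_index, similarity), or (None, similarity) below threshold.
    For masked probes, query and gallery should both be upper-half embeddings."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q                      # cosine similarity to every gallery entry
    best = int(np.argmax(sims))
    if sims[best] < threshold:
        return None, float(sims[best])
    return best, float(sims[best])

# Toy example: the query is closest to gallery entry 0
idx, sim = cosine_match(np.array([0.9, 0.1, 0.0]),
                        np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]))
```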



Abstract

The invention is suitable for the field of computer vision, and provides a mask robust face recognition network and method, electronic equipment and a storage medium. The face recognition network comprises a whole-image feature extraction network, a feature segmentation module, a first branch network and a second branch network. The whole-image feature extraction network is used for extracting shallow whole-image features from an input face image; the feature segmentation module is used for spatially segmenting the shallow whole-image features at the positions of preset segmentation points to obtain upper-half and lower-half shallow features. The first branch network is used for extracting upper-half advanced features from the upper-half shallow features, and the upper-half advanced features are used for masked face recognition. The second branch network is used for extracting lower-half advanced features from the lower-half shallow features; the lower-half advanced features are spliced with the upper-half advanced features, and the spliced full features are used for normal face recognition. The robustness of the face recognition network to masked faces is thereby improved.

Description

Technical field

[0001] The invention belongs to the field of computer vision, and in particular relates to a mask robust face recognition network, method, electronic equipment and storage medium.

Background technique

[0002] Face recognition technology is widely used in the field of biometrics. Compared with other biometric methods, such as fingerprints and irises, face recognition is non-contact and easy to collect. With the continuing development of deep learning theory, several families of methods address the occlusion recognition problem: methods based on manual annotation, occlusion detection, or segmentation first obtain the occluded area of the face image and then perform feature extraction only on the non-occluded area; methods based on direct feature extraction use various loss functions to constrain the distance between occluded and non-occluded images; image reconstruction-based methods usually use gener...

Claims


Application Information

IPC(8): G06K9/00; G06N3/04; G06N3/08
CPC: G06N3/08; G06N3/045
Inventor: 殷绪成, 李凯, 杨春
Owner: 珠海亿智电子科技有限公司