Face feature extraction method, low-resolution face recognition method and device

A face feature extraction technology in the field of artificial intelligence, addressing the problems that face images captured in practice have low resolution and that recognition accuracy on such images is low, and achieving efficient follow-up processing and a reduced parameter count.

Active Publication Date: 2022-06-28
一脉通(深圳)智能科技有限公司

AI Technical Summary

Problems solved by technology

In the direction of face recognition, many neural network models (such as VGG, ResNet and MobileNet) can already achieve very high recognition accuracy on high-resolution face images. In actual surveillance scenarios, however, the camera is usually installed at a relatively high position in order to obtain a sufficiently large field of view, so the face images it captures generally have low resolution. The accuracy of existing models on low-resolution face images remains low and needs further improvement.


Examples

Embodiment 1

[0056] A feature extraction network is built according to the structure shown in Figure 1. The initial feature extraction module 31 is a 3*3 convolutional layer whose input and output channel counts are 3 and 64 respectively; the length and width of the image remain unchanged. The structure of the GTFB module 32 is shown in Figure 2: the GTFB module 32 contains residual connections, and six GTFB modules 32 are used. Inside the GTFB module 32, the input and output feature maps of the 3*3, 5*5 and deformable convolutions all have 64 channels. The output feature maps of the 3*3 convolution and the deformable convolution are added, and after a 1*1 convolution and an activation function a 64-channel feature map is output. The 1*1 convolutional layer at the tail of the GTFB module 32 has 192 input channels and 64 output channels, so the feature map of the GTFB module 32 is in...
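
A minimal PyTorch sketch of one plausible reading of the structure above. The channel counts (a 3-to-64 stem, 64-channel branches, a 192-to-64 tail 1*1 convolution, six stacked modules with residual connections) come from this embodiment; the wiring of the 192-channel concatenation (taken here as the 5*5 branch, the fused 3*3/deformable branch, and the block input) and all class and variable names are assumptions, since the paragraph is truncated.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d  # deformable convolution

class GTFB(nn.Module):
    """Sketch of one GTFB module (names hypothetical).

    Every branch maps 64 -> 64 channels, the tail 1x1 conv maps 192 -> 64,
    and a residual connection wraps the module, per the embodiment. The
    concatenation feeding the tail conv is an assumption.
    """
    def __init__(self, ch: int = 64):
        super().__init__()
        self.conv3 = nn.Conv2d(ch, ch, 3, padding=1)          # 3*3 branch
        self.conv5 = nn.Conv2d(ch, ch, 5, padding=2)          # 5*5 branch
        self.offset = nn.Conv2d(ch, 2 * 3 * 3, 3, padding=1)  # offsets for deformable conv
        self.deform = DeformConv2d(ch, ch, 3, padding=1)      # deformable branch
        self.fuse = nn.Conv2d(ch, ch, 1)                      # 1*1 conv after 3x3 + deform sum
        self.tail = nn.Conv2d(3 * ch, ch, 1)                  # tail 1*1 conv, 192 -> 64
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        b3 = self.conv3(x)
        b5 = self.conv5(x)
        bd = self.deform(x, self.offset(x))
        fused = self.act(self.fuse(b3 + bd))        # add 3*3 and deformable outputs
        cat = torch.cat([b5, fused, x], dim=1)      # assumed 192-channel concatenation
        return self.tail(cat) + x                   # residual connection

class FeatureExtractor(nn.Module):
    """Initial 3*3 conv (3 -> 64, spatial size preserved) followed by six GTFB modules."""
    def __init__(self):
        super().__init__()
        self.stem = nn.Conv2d(3, 64, 3, padding=1)
        self.blocks = nn.Sequential(*[GTFB(64) for _ in range(6)])

    def forward(self, x):
        return self.blocks(self.stem(x))
```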

Embodiment 2

[0063] On the basis of the network in Embodiment 1, a WSA attention module 5 is added. Inside the WSA attention module 5, the first feature map and the feature maps output by the GTFB modules 32 are each reduced to one channel by a 1*1 convolutional layer, and the dimensionality-reduced feature maps are concatenated. Another 1*1 convolutional layer reduces the concatenated feature maps to one channel, and after activation by the sigmoid function the spatial attention map is obtained.
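
A minimal sketch of the WSA spatial-attention computation described above. The number of input feature maps (taken here as the first feature map plus the outputs of the six GTFB modules, seven 64-channel maps in total) and all names are assumptions.

```python
import torch
import torch.nn as nn

class WSA(nn.Module):
    """Sketch of the WSA attention module: each input map is reduced to one
    channel by its own 1x1 conv, the single-channel maps are concatenated,
    a further 1x1 conv reduces them to one channel, and a sigmoid yields
    the spatial attention map. The input count (7) is an assumption."""
    def __init__(self, ch: int = 64, n_maps: int = 7):
        super().__init__()
        self.reduce = nn.ModuleList(nn.Conv2d(ch, 1, 1) for _ in range(n_maps))
        self.mix = nn.Conv2d(n_maps, 1, 1)

    def forward(self, feature_maps):
        ones = [r(f) for r, f in zip(self.reduce, feature_maps)]  # 64 -> 1 channel each
        att = torch.sigmoid(self.mix(torch.cat(ones, dim=1)))     # spatial attention map
        return att  # multiplied element-wise into a feature map downstream
```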

[0064] A comparative experiment was conducted using exactly the same training and testing conditions (including the experimental method and steps, hardware, framework, data set, optimizer, learning rate, etc.) as in Embodiment 1. The results show that after adding the WSA attention module 5, the recognition accuracy is 95.3% on the 14*14-resolution FERET dataset and 97.1% on the 28*28-resolution FERET dataset. It shows tha...

Embodiment 3

[0066] On the basis of the network in Embodiment 2, a skip connection fusion module 6 is added; the resulting feature extraction network structure is shown in Figure 4. In this embodiment, the internal structure of the skip connection fusion module 6 is shown in Figure 6. After the first feature map and the second feature map are input into the skip connection fusion module 6, they are first concatenated in the channel direction to obtain the first skip-connection feature map with 128 channels. In parallel, the first feature map and the second feature map are each passed through a 3*3 convolution and a ReLU activation function, generating the second and third skip-connection feature maps, each with 128 channels. Then the first, second and third skip-connection feature maps are fused by element-wise summation, and after a 3*3 convolution and ReLU activati...
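
A minimal sketch of the skip connection fusion module as described above, assuming two 64-channel inputs. The output channel count of the final 3*3 convolution is an assumption (the paragraph is truncated), and all names are hypothetical.

```python
import torch
import torch.nn as nn

class SkipFusion(nn.Module):
    """Sketch of the skip connection fusion module: the two inputs are
    (a) concatenated into a 128-channel map and (b) each expanded to 128
    channels by a 3x3 conv + ReLU; the three 128-channel maps are summed
    element-wise and passed through a final 3x3 conv + ReLU."""
    def __init__(self, ch: int = 64):
        super().__init__()
        self.expand1 = nn.Sequential(nn.Conv2d(ch, 2 * ch, 3, padding=1), nn.ReLU(inplace=True))
        self.expand2 = nn.Sequential(nn.Conv2d(ch, 2 * ch, 3, padding=1), nn.ReLU(inplace=True))
        self.out = nn.Sequential(nn.Conv2d(2 * ch, 2 * ch, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, f1, f2):
        s1 = torch.cat([f1, f2], dim=1)   # first skip-connection map, 128 channels
        s2 = self.expand1(f1)             # second skip-connection map, 128 channels
        s3 = self.expand2(f2)             # third skip-connection map, 128 channels
        return self.out(s1 + s2 + s3)     # element-wise sum, then 3x3 conv + ReLU
```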

Abstract

The invention discloses a face feature extraction method comprising steps of acquiring a face image and a feature extraction network, extracting features with an initial feature extraction module and GTFB modules, fusing features with a bottleneck layer, enlarging the feature map, and transforming features with a feature transformation subnetwork. The amplification subnetwork and the feature transformation subnetwork are connected to form the network, and the GTFB module reuses important information at different scales to reduce the redundancy of useless information, which is conducive to fully extracting information useful for face recognition. The invention also discloses a low-resolution face recognition method and device. The low-resolution face recognition method includes steps of feature extraction and vector matching; by extracting face features efficiently, it improves the accuracy of low-resolution face recognition and meets the needs of practical applications.
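
The recognition step pairs the extracted feature vector with enrolled identities by vector matching. The abstract does not specify the matching metric; below is a minimal sketch assuming cosine similarity against a gallery of enrolled feature vectors, with the function name and threshold value being hypothetical.

```python
import torch
import torch.nn.functional as F

def match_identity(query: torch.Tensor, gallery: torch.Tensor, threshold: float = 0.5):
    """Match one query feature vector against a gallery of enrolled vectors.

    query:   (D,) feature vector produced by the extraction network
    gallery: (N, D) enrolled feature vectors, one per identity
    Returns the best-matching gallery index, or None if below the threshold.
    Cosine similarity and the threshold are assumptions, not from the patent.
    """
    sims = F.cosine_similarity(query.unsqueeze(0), gallery, dim=1)  # (N,)
    best = int(torch.argmax(sims))
    return best if sims[best] >= threshold else None
```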

Description

technical field
[0001] The invention belongs to the technical field of artificial intelligence, and in particular relates to a face feature extraction method, a low-resolution face recognition method and a device.
Background technique
[0002] With the growth of computer computing power, deep learning technology has gradually been applied to various fields of artificial intelligence and has achieved breakthrough results. In the direction of face recognition, many neural network models (such as VGG, ResNet and MobileNet) can already achieve very high recognition accuracy on high-resolution face images. In actual surveillance scenarios, however, the camera is usually installed at a relatively high position in order to obtain a sufficiently large field of view, so the face images it captures generally have low resolution, and the recognition accuracy of existing models on low-resolution face images remains low, which...

Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06V40/16; G06V10/774; G06V10/80; G06V10/77; G06V10/82; G06K9/62; G06T3/40; G06N3/04; G06N3/08
CPC: G06T3/4053; G06N3/08; G06N3/045; G06F18/213; G06F18/253; G06F18/214
Inventor: 宋小青
Owner: 一脉通(深圳)智能科技有限公司