Human face identity recognition method based on GB(2D)2PCANet depth convolution model

A deep convolution identity recognition technology, applied in character and pattern recognition, neural learning methods, biological neural network models, etc.

Active Publication Date: 2016-06-29
慧镕电子系统工程股份有限公司

Problems solved by technology

[0004] Aiming at the above-mentioned problems in face recognition, the present invention proposes a face recognition method based on the GB(2D)2PCANet deep convolution model. The method not only absorbs the advantages of deep models and Gabor filtering and can extract more abstract features from the data, but is also robust to factors such as illumination, expression and occlusion, and overcomes the drawbacks of convolutional neural networks, namely long training time and the need for a large number of labels.




Embodiment Construction

[0030] In order to better illustrate the purpose, specific steps and characteristics of the present invention, the present invention will be further described in detail below in conjunction with the accompanying drawings, taking the AR face library [5] as an example:

[0031] The present invention proposes a face recognition method based on the GB(2D)2PCANet deep convolution model; the GB(2D)2PCANet deep convolution model is shown in figure 1. GB(2D)2PCANet consists of two feature extraction layers and a nonlinear output layer. The convolution filters of the feature extraction layers are learned through Gabor filtering and (2D)2PCA and are convolved with the original input image to extract features; the nonlinear output layer includes binary hashing and local histogram calculation operations, which are used to compute the final features.
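To make the two feature extraction layers concrete, the following is a minimal NumPy/SciPy sketch of how the first layer could be assembled: a small Gabor filter bank produces Gabor feature images, mean-removed sub-blocks are scanned from them, and (2D)2PCA row/column projection axes are combined into convolution filters that are convolved with the original samples. The function names (gabor_kernel, extract_patches, learn_2d2pca_filters), filter sizes and parameter values are illustrative assumptions, not values taken from the patent.

```python
# Hypothetical sketch of the first GB(2D)2PCANet feature extraction layer.
# All names, sizes and parameters are assumptions for illustration.
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(ksize=11, sigma=4.0, theta=0.0, lam=8.0, gamma=0.5, psi=0.0):
    """Real part of a 2-D Gabor filter at orientation theta."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam + psi)
    return g - g.mean()                      # zero mean: flat regions give no response

def extract_patches(img, k):
    """Scan every k x k sub-block of an image and remove each block's mean."""
    h, w = img.shape
    return np.array([img[i:i + k, j:j + k] - img[i:i + k, j:j + k].mean()
                     for i in range(h - k + 1) for j in range(w - k + 1)])

def learn_2d2pca_filters(gabor_maps, k=7, n_row=2, n_col=2):
    """Learn (2D)^2PCA projection axes from mean-removed sub-blocks and
    combine the leading row/column axes into k x k convolution filters."""
    patches = np.concatenate([extract_patches(m, k) for m in gabor_maps])
    g_row = np.einsum('nij,nkj->ik', patches, patches) / len(patches)   # E[A A^T]
    g_col = np.einsum('nij,nik->jk', patches, patches) / len(patches)   # E[A^T A]
    _, u = np.linalg.eigh(g_row)              # eigenvectors, ascending eigenvalues
    _, v = np.linalg.eigh(g_col)
    u = u[:, ::-1][:, :n_row]                 # leading row projection axes
    v = v[:, ::-1][:, :n_col]                 # leading column projection axes
    return [np.outer(u[:, i], v[:, j]) for i in range(n_row) for j in range(n_col)]

# usage: learn first-layer filters from a batch of preprocessed face samples
faces = [np.random.rand(64, 64) for _ in range(10)]      # stand-ins for AR samples
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
gabor_maps = [convolve2d(f, g, mode='same') for f in faces for g in bank]
filters = learn_2d2pca_filters(gabor_maps)
layer1_maps = [convolve2d(f, w, mode='same') for f in faces for w in filters]
```

Under the same assumptions, the second feature extraction layer would repeat the Gabor filtering and learn_2d2pca_filters steps on the first-layer feature maps before the nonlinear output layer is applied.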

[0032] The present invention proposes a face recognition method based on the GB(2D)2PCANet deep convolution...



Abstract

The invention discloses a human face identity recognition method based on a GB(2D)2PCANet deep convolution model. The model training method includes the following steps: preprocessed human face samples are sequentially fed into a first feature extraction layer, multiple sub-blocks are scanned from the obtained Gabor feature images and mean removal is performed, optimal projection axes are extracted through (2D)2PCA and convolved with the original training samples, and the first-layer feature maps are obtained; the first-layer feature maps are fed into a second feature extraction layer and the steps are repeated to obtain the second-layer feature maps; the feature maps are output in binarized form, and local-area histograms are calculated and spliced to serve as the final features; the final features are fed into a linear SVM classifier, and an optimized human face identity recognition model is obtained. Effective feature expressions can be learned automatically, good locality is achieved, good robustness to illumination, expression, noise and the like is obtained, and the recognition performance for human face identities is improved.
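As a rough illustration of the nonlinear output stage described above, here is a hedged sketch of the binary hashing, local histogram splicing and linear SVM steps; the block size, number of second-layer maps and the random stand-in data are assumptions made only for demonstration, not parameters from the patent.

```python
# Hypothetical sketch of the nonlinear output layer and the SVM classifier.
# Block size, map counts and data are illustrative assumptions.
import numpy as np
from sklearn.svm import LinearSVC

def hash_and_histogram(second_layer_maps, block=8):
    """Binarize each second-layer feature map, pack the bits into one integer
    code per pixel, then splice histograms of non-overlapping local blocks."""
    maps = np.asarray(second_layer_maps)               # shape (L2, H, W)
    bits = (maps > 0).astype(np.int64)                 # binary hashing (step function)
    weights = 2 ** np.arange(len(maps))[:, None, None]
    code = (bits * weights).sum(axis=0)                # integer code in [0, 2^L2)
    n_bins = 2 ** len(maps)
    h, w = code.shape
    hist = [np.bincount(code[i:i + block, j:j + block].ravel(), minlength=n_bins)
            for i in range(0, h - block + 1, block)
            for j in range(0, w - block + 1, block)]
    return np.concatenate(hist)                        # spliced final feature vector

# usage: one final feature vector per face, then a linear SVM for identity recognition
rng = np.random.default_rng(0)
features = np.array([hash_and_histogram(rng.standard_normal((4, 32, 32)))
                     for _ in range(20)])              # stand-in second-layer maps
labels = rng.integers(0, 5, size=20)                   # stand-in identity labels
clf = LinearSVC().fit(features, labels)
print(clf.predict(features[:3]))
```

In the method described by the abstract, the histograms of all local blocks are concatenated into a single high-dimensional vector per face, so the linear SVM operates on these spliced histogram features rather than on the raw feature maps.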

Description

Technical field:

[0001] The invention belongs to the field of machine vision, and in particular relates to a face recognition method based on the GB(2D)2PCANet deep convolutional model.

Background technique:

[0002] Face recognition technology uses computers to analyze face videos or images, extract facial features from them, and identify identities through these features.

[0003] At present, face recognition technology is developing rapidly, and a large number of research results have been obtained. Common face recognition algorithms can be divided into several categories: face recognition based on geometric features, face recognition based on subspace analysis, face recognition based on elastic matching, face recognition based on hidden Markov models, face recognition based on neural networks, and face recognition based on 3D. For example, Takatsugu et al. [1] used an elastic matching method based on dynamic link structure to locate the ...


Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G06K9/00G06K9/62G06N3/08
CPCG06N3/08G06V40/168G06F18/214
Inventor 蒋敏鹿茹茹孔军孙林胡珂杰王莉
Owner 慧镕电子系统工程股份有限公司