Spatial coherence feature-based quick identification method for human face expression of any pose

A facial expression recognition technology for faces in arbitrary poses, applied in the field of emotion recognition. It addresses the problems that conventional features lack spatial coherence and that this lowers the model's recognition rate, and achieves the effect of improving recognition rate and accuracy.

Status: Inactive | Publication Date: 2017-05-31
JIANGSU UNIV

AI Technical Summary

Problems solved by technology

[0003] (1) In the paper by S. Eleftheriadis et al. entitled "Discriminative Shared Gaussian Processes for Multiview and View-Invariant Facial Expression Recognition", a discriminative Gaussian process latent variable model is used for multi-pose facial expression recognition. However, this method relies on traditional hand-crafted features, which are not robust to object occlusion, face deformation and pose changes.
(2) In the paper entitled "Joint Fine-Tuning in Deep Neural Networks for Facial Expression Recognition" by H. Jung et al., face features learned by a cascaded convolutional neural network are combined with geometric features of key facial regions, and the method achieves good expression recognition results. However, the convolutional neural network learns its features from the complete face, so the combined features it learns do not possess spatial coherence, which reduces the recognition rate of the model.




Embodiment Construction

[0033] The present invention first normalizes the pose of the original image and synthesizes the frontal face image corresponding to a face image of arbitrary pose. The synthesized frontal face image is then preprocessed, including grayscale conversion and image size normalization. Next, key regions of the preprocessed frontal face image are sampled through a tree model, and an unsupervised feature learning method, an autoencoder, is trained on the sampled key regions to learn the mapping relationship between input features and output features. This mapping relationship is obtained by continuously updating the reconstruction error function between the input features and the output features; training stops when the reconstruction error converges, and the weights and biases at that point constitute the final mapping. This mapping is then applied to all synthesized frontal face images to obtain unified front...
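To make the reconstruction-error-driven training step concrete, the following is a minimal sketch, not the patented implementation: a single-hidden-layer autoencoder in NumPy trained on flattened key-region patches, stopping once the reconstruction error converges. The hidden dimension, learning rate and convergence tolerance are illustrative assumptions.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_autoencoder(patches, hidden_dim=64, lr=0.1, tol=1e-4, max_epochs=500):
        """patches: (n_samples, n_features) matrix of flattened key-region patches in [0, 1]."""
        n, d = patches.shape
        rng = np.random.default_rng(0)
        W1 = rng.normal(scale=0.01, size=(d, hidden_dim))   # encoder weights
        b1 = np.zeros(hidden_dim)                            # encoder bias
        W2 = rng.normal(scale=0.01, size=(hidden_dim, d))   # decoder weights
        b2 = np.zeros(d)                                     # decoder bias

        prev_err = np.inf
        for epoch in range(max_epochs):
            h = sigmoid(patches @ W1 + b1)         # hidden (learned) features
            x_hat = sigmoid(h @ W2 + b2)           # reconstruction of the input
            err = np.mean((x_hat - patches) ** 2)  # reconstruction error

            # Backpropagate the squared reconstruction error.
            d_out = (x_hat - patches) * x_hat * (1 - x_hat) / n
            dW2 = h.T @ d_out
            db2 = d_out.sum(axis=0)
            d_hid = (d_out @ W2.T) * h * (1 - h)
            dW1 = patches.T @ d_hid
            db1 = d_hid.sum(axis=0)

            W1 -= lr * dW1; b1 -= lr * db1
            W2 -= lr * dW2; b2 -= lr * db2

            # Stop once the reconstruction error has converged.
            if abs(prev_err - err) < tol:
                break
            prev_err = err

        # W1 and b1 define the learned mapping from input features to hidden features.
        return W1, b1

    # Usage (data is hypothetical):
    # patches = ...                         # sampled key-region patches
    # W1, b1 = train_autoencoder(patches)
    # features = sigmoid(patches @ W1 + b1) # unified features for all frontal images

The point of the sketch is only the stopping rule the paragraph describes: the encoder/decoder parameters are updated until the reconstruction error stabilizes, and the encoder weights and bias then serve as the fixed feature mapping applied to every synthesized frontal face image.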



Abstract

The invention discloses a spatial coherence feature-based quick identification method for a human face expression of any pose. The method comprises the steps of: firstly, synthesizing a front face image corresponding to a human face image of any pose; secondly, detecting 51 key feature points on the synthesized front face image and extracting key regions centered on those feature points; thirdly, performing quick unsupervised feature learning on the key regions; and finally, performing convolution and pooling on each key region as a unit to obtain high-level features based on the unsupervised feature learning, combining the high-level features with the geometric position feature of each key region to obtain a spatial coherence feature for identifying the human face expression of any pose, and inputting the spatial coherence feature into an SVM for training to obtain a unified expression identification model, thereby completing the identification of the human face expression of any pose. The method solves the problems of low identification rate caused by conventional features lacking a spatial constraint relationship and low efficiency caused by conventional multi-pose facial expression identification building a separate model for each pose, thereby effectively improving identification accuracy and efficiency.
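The final feature-assembly and classification stage described in the abstract can be sketched as follows. This is a hedged illustration, assuming mean pooling within each key region and an RBF-kernel SVM from scikit-learn; the region count of 51 follows the abstract, while the feature dimensions, pooling choice and SVM parameters are illustrative assumptions rather than the patent's exact design.

    import numpy as np
    from sklearn.svm import SVC

    N_REGIONS = 51  # one key region per detected facial feature point

    def spatial_coherence_feature(region_features, region_centers):
        """region_features: list of N_REGIONS arrays, each (n_patches, feat_dim),
           produced by convolving the learned filters over one key region.
           region_centers:  (N_REGIONS, 2) feature-point coordinates (geometric positions)."""
        pooled = [f.mean(axis=0) for f in region_features]          # pool within each key region
        appearance = np.concatenate(pooled)                          # high-level appearance part
        geometry = np.asarray(region_centers, dtype=float).ravel()   # geometric position part
        return np.concatenate([appearance, geometry])                # spatial coherence feature

    # Training a single, pose-unified expression classifier (data is hypothetical):
    # X = np.stack([spatial_coherence_feature(f, c) for f, c in training_samples])
    # y = np.array(training_labels)          # e.g. the 6 basic expressions
    # clf = SVC(kernel="rbf", C=1.0).fit(X, y)
    # prediction = clf.predict(X[:1])

Because the geometric positions of the key regions are concatenated alongside the pooled appearance features, the resulting vector encodes where each local feature lies on the face, which is what gives the representation its spatial constraint and allows one SVM model to cover all poses.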

Description

Technical Field

[0001] The invention belongs to the field of emotion recognition, and in particular relates to a method and system for fast recognition of human facial expressions in arbitrary poses based on spatial coherence features.

Background Technique

[0002] Facial expression recognition is an important research direction in pattern recognition, human-computer interaction and computer vision, and has become a research hotspot at home and abroad. Generally speaking, the six most common basic human expressions are happiness, sadness, anger, surprise, disgust and fear. In recent years, the continuous proposal of various pose-robust features has promoted the development of multi-pose automatic facial expression recognition technology. For example, traditional facial expression recognition models can only perform expression recognition on frontal or near-frontal face images, but their recognition effect on facial expressions of...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06K9/62
CPC: G06V40/171; G06V40/174; G06F18/217; G06F18/2411
Inventor: 毛启容, 张飞飞, 许国朋, 詹永照, 苟建平, 王良君
Owner: JIANGSU UNIV