Multi-sample facial expression recognition method based on low-rank tensor decomposition

A tensor decomposition and expression recognition technology, applied to the field of facial expression recognition, that addresses two problems: the same expression varies markedly across different individuals, and conventional representations struggle to preserve the nonlinear characteristics of expressions. The effect is a higher facial expression recognition rate and stronger expressive ability of the extracted features.

Active Publication Date: 2019-11-29

AI-Extracted Technical Summary

Problems solved by technology

That is, the same kind of expression is easily affected by different individuals, and in the research of practical application, the change information of expression features has two c...


The invention provides a multi-sample facial expression recognition method based on low-rank tensor decomposition, comprising an image preprocessing step, a feature extraction step, a tensor modeling step, a low-rank learning step, a tensor decomposition step, and a feature classification step. The method uses a tensor feature space to preserve the nonlinear features of an image. Facial subspace region features of different individuals are learned through low-rank tensor decomposition to obtain face information along different dimensions, and the tensors in each subspace are decomposed, clustered, and reconstructed to obtain effective expression features. The resulting features express expression information more strongly, so the facial expression recognition rate is improved.

Application Domain

Acquiring/recognising facial features

Technology Topic

Expression Feature · Method of undetermined coefficients +3


  • Multi-sample facial expression recognition method based on low-rank tensor decomposition


  • Experimental program(1)

Example Embodiment

[0016] In order to make the objectives, technical solutions, and advantages of the present invention clearer, the present invention will be further described in detail below in conjunction with the embodiments and the drawings.
[0017] The overall framework of the multi-sample facial expression recognition method based on low-rank tensor decomposition proposed by the present invention is shown in Figure 1.
[0018] The present invention provides a facial expression recognition method, which includes the following steps:
[0019] Perform steps S1 to S5 on the sample set and test set:
[0020] S1: Image preprocessing: use a face detection algorithm to crop the face region from the image;
[0021] S2: Feature extraction: extract features from the facial expression image using feature operators of multiple modalities;
[0022] S3: Tensor modeling: from the extracted operator features of the face region, construct a tensor model indexed by operator and experimental subject;
[0023] S4: Low-rank learning: perform low-rank learning on the expression sample space represented by the tensor to obtain the low-rank matrices of its subspaces;
[0024] S5: Tensor decomposition: decompose and cluster the low-rank samples in each subspace to obtain projection matrices along the different dimensions;
[0025] S6: Feature classification: compare and classify the features extracted after mapping by the projection matrices.
[0026] Among them, step S1 includes the following steps:
[0027] S11: Convert the image from color space to gray space;
[0028] S12: Use a face detection algorithm to crop the face region;
[0029] S13: Rescale the face region image.
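The preprocessing steps S11 to S13 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the bounding box stands in for the output of whatever face detector is used (the patent does not prescribe one), and nearest-neighbor sampling stands in for the rescaling method.

```python
import numpy as np

def preprocess(img_rgb, bbox, out_size=(64, 64)):
    """Sketch of step S1: grayscale conversion, face cropping, rescaling.

    `bbox = (top, left, height, width)` stands in for the output of any
    face detection algorithm.
    """
    # S11: color -> gray via the standard luminance weights
    gray = (0.299 * img_rgb[..., 0]
            + 0.587 * img_rgb[..., 1]
            + 0.114 * img_rgb[..., 2])
    # S12: crop the detected face region
    t, l, h, w = bbox
    face = gray[t:t + h, l:l + w]
    # S13: rescale by nearest-neighbor sampling (simplest possible resize)
    rows = (np.arange(out_size[0]) * h / out_size[0]).astype(int)
    cols = (np.arange(out_size[1]) * w / out_size[1]).astype(int)
    return face[np.ix_(rows, cols)]

img = np.random.rand(120, 160, 3)            # stand-in for an input photo
face = preprocess(img, bbox=(20, 40, 80, 80))
print(face.shape)                             # (64, 64)
```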
[0030] Step S2 includes the following steps:
[0031] S21: Extract local binary pattern (LBP) operator features from the facial expression image;
[0032] S22: Extract Gabor operator features from the facial expression image.
[0033] The face detection methods in step S1 and the operator feature extraction methods in step S2 are all common prior-art methods, and the present invention does not limit them.
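As one example of such a prior-art operator, the LBP feature of step S21 can be sketched in a few lines of numpy; the Gabor features of S22 would follow the same pattern with a bank of Gabor filters. The histogram descriptor below is one common choice, not something the patent specifies.

```python
import numpy as np

def lbp_features(gray):
    """Minimal 8-neighbor LBP sketch (step S21): each pixel's code is an
    8-bit word recording which neighbors are >= the center pixel."""
    c = gray[1:-1, 1:-1]                      # center pixels
    # 8 neighbors, clockwise from top-left; each contributes one bit
    shifts = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = gray[1+dy:gray.shape[0]-1+dy, 1+dx:gray.shape[1]-1+dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    # descriptor: normalized 256-bin histogram of the LBP codes
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()

face = np.random.rand(64, 64)
feat = lbp_features(face)
print(feat.shape)     # (256,)
```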
[0034] The purpose of tensor modeling is to identify the factors that affect facial expression recognition, such as different individuals and different operator modalities, and thereby determine the physical meaning represented by each coordinate axis of the tensor. The subsequent low-rank model operates mainly on the feature sample subspaces, and tensor decomposition is finally used to extract the dimensions relevant to expression. Step S3 includes the following steps:
[0035] S31: Stretch the features extracted by each modality's feature operator in step S2 into vectors, standardize them to the same length, and cascade them into a feature matrix whose second dimension indexes the operator channels;
[0036] S32: Take face identity and expression attributes as new coordinate axes, and stack the feature matrices of different face identities and expressions along these axes to build the feature space tensor X ∈ R^(I1×I2×I3×I4),
[0037] where I1 is the feature dimension, I2 is the channel dimension of the LBP and Gabor operator features, I3 is the face identity dimension, and I4 is the expression information dimension. Only the first three dimensions are processed in the subsequent steps. A face corresponds to a specific person, and face identity here denotes that person's identity tag.
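The construction of steps S31 and S32 can be sketched as follows, with small made-up sizes and random stand-ins for the real operator outputs:

```python
import numpy as np

# Made-up sizes: I1 feature length, I2 operator channels (LBP, Gabor),
# I3 face identities, I4 expression classes.
I1, I2, I3, I4 = 256, 2, 5, 7
rng = np.random.default_rng(0)

def feature_matrix():
    """S31: stretch each operator's features to a common length I1 and
    cascade them into an I1 x I2 matrix (random stand-ins here)."""
    lbp, gabor = rng.random(I1), rng.random(I1)
    return np.stack([lbp, gabor], axis=1)     # shape (I1, I2)

# S32: stack the feature matrices along the identity and expression axes
X = np.empty((I1, I2, I3, I4))
for p in range(I3):          # face identity axis
    for e in range(I4):      # expression axis
        X[:, :, p, e] = feature_matrix()
print(X.shape)   # (256, 2, 5, 7)
```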
[0038] In the present invention, the low-rank model takes the rank of a matrix as a sparsity measure over the sample subspace and minimizes a rank-based regularization, which effectively separates the data subspace from the noise part.
[0039] Step S4 includes the following steps:
[0040] S41: Use the feature space tensor X to construct a low-rank model of the feature subspace and noise part, as shown in Figure 2:
[0041] min_{Z,E} rank(Z) + λ‖E‖_{2,1}  s.t.  X = DZ + E
[0042] where X is the feature space tensor, rank(·) denotes the rank of the target matrix to be solved, Z is the low-rank reconstruction matrix describing the feature subspace, λ is a penalty parameter balancing the low-rank term against the sparse error term, ‖·‖_{2,1} denotes the l_{2,1} norm, E is the sparse noise part, and D is a dictionary spanning the feature space;
[0043] S42: The convex relaxation of the low-rank model in S41 is:
[0044] min_{Z,E} ‖Z‖_* + λ‖E‖_{2,1}  s.t.  X = XZ + E
[0045] where the dictionary D is replaced by the tensor data X itself in the linear tensor space, and ‖·‖_* denotes the nuclear norm;
[0046] S43: For high-order tensor data, the low-rank model must consider subspace representations in different dimensions. The convexified low-rank model is therefore applied in each face identity dimension subspace, and the resulting low-rank matrices are assembled into a tensor:
[0047] min Σ_{n=1}^{N} ‖Z^(n)‖_* + λ‖E^(n)‖_{2,1}  s.t.  X^(n) = X^(n) Z^(n) + E^(n),  n = 1, …, N
[0048] Z = Ψ(Z^(1), Z^(2), …, Z^(N))
[0050] E = [E^(1); E^(2); …; E^(N)]
[0051] where Z^(n) and E^(n) are, respectively, the matrix data of Z and E in the n-th face identity dimension subspace, N is the number of face identities, and Ψ(·) stacks the low-rank matrices Z^(n) of the face identity subspaces into the low-rank tensor Z;
[0052] S44: Iteratively optimize the model in S43 with the augmented Lagrangian alternating direction method of multipliers to obtain the low-rank tensor Z; then use the low-rank matrices Z^(n) of the face identity subspaces in Z to cluster, reconstruct, and encode the feature space tensor X from S3, yielding its low-rank representation.
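The iterative optimization in S44 alternates between proximal sub-steps. As a minimal sketch (not the patent's solver), the two proximal operators involved are singular value thresholding for the nuclear norm term and column-wise shrinkage for the l_{2,1} term; all sizes here are made up.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: proximal operator of tau*||.||_*,
    the sub-step that updates the low-rank matrix Z."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

def l21_shrink(M, tau):
    """Column-wise shrinkage: proximal operator of tau*||.||_{2,1},
    the sub-step that updates the sparse error E."""
    norms = np.linalg.norm(M, axis=0)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return M * scale

rng = np.random.default_rng(1)
M = rng.standard_normal((8, 8))
Z = svt(M, tau=2.0)          # small singular values vanish, rank drops
E = l21_shrink(M, tau=3.0)   # columns with small l2 norm become all-zero
```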
[0053] Step S5 includes the following steps:
[0054] S51: From the low-rank representation obtained in S4, construct a tensor Tucker decomposition model, as shown in Figure 3:
[0055] X ≈ G ×_1 U_1 ×_2 U_2 ×_3 U_3
[0056] where ×_n denotes the tensor mode-n product and U_1, U_2, U_3 are the projection matrices on the feature, operator channel, and face identity dimensions, respectively. The core tensor G retains the main information of the original tensor, represents the interactions among the dimensions, and has a certain degree of stability;
[0057] S52: Decompose the non-negative tensor by non-negative Tucker decomposition; the projection matrices in each dimension are obtained by solving the following optimization problem:
[0058] min_{G, U_n} ‖X − G ×_1 U_1 ×_2 U_2 ×_3 U_3‖_2^2  s.t.  G ≥ 0, U_n ≥ 0
[0060] where ‖·‖_2 denotes the l_2 norm; solving the problem yields the projection matrix U_n in each dimension;
[0061] S53: Use the projection matrices U_n obtained in S52 to reduce the dimensionality of the low-rank feature space tensor along each dimension, extracting an effective representation.
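A Tucker decomposition of the kind used in step S5 can be sketched with truncated HOSVD. Note this is a stand-in: the patent uses *non-negative* Tucker decomposition (solved by iterative updates under non-negativity constraints), while HOSVD below drops that constraint in exchange for a closed-form numpy-only illustration of the mode-n product and factor structure.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: mode-n fibers become the columns of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_n_product(T, U, n):
    """Tensor mode-n product T x_n U."""
    return np.moveaxis(np.tensordot(U, np.moveaxis(T, n, 0), axes=1), 0, n)

def hosvd(T, ranks):
    """Truncated HOSVD: returns core tensor G and factor matrices U_n."""
    Us = [np.linalg.svd(unfold(T, n), full_matrices=False)[0][:, :r]
          for n, r in enumerate(ranks)]
    G = T
    for n, U in enumerate(Us):
        G = mode_n_product(G, U.T, n)   # project onto each factor
    return G, Us

rng = np.random.default_rng(2)
T = rng.random((6, 2, 5))
G, Us = hosvd(T, ranks=(6, 2, 5))       # full ranks -> exact recovery
R = G
for n, U in enumerate(Us):              # reconstruct: G x_1 U_1 x_2 U_2 x_3 U_3
    R = mode_n_product(R, U, n)
print(np.allclose(R, T))   # True
```

Choosing ranks smaller than the tensor dimensions gives the dimensionality reduction of step S53.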
[0062] After obtaining the projection matrices of the face images in the sample set and the test set in different dimensions, proceed to step S6, and step S6 includes the following steps:
[0063] S61: Compute the feature distance between the corresponding face regions of sample face i and test face j as:
[0064] d(i, j) = ‖U^T f_i − U^T f_j‖_2
[0065] where T denotes the matrix transpose and f_i, f_j are the extracted features;
[0066] S62: Obtain the similarity between whole faces from the feature distances and classify the test face accordingly.
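Steps S61 and S62 amount to a nearest-neighbor decision in the projected space. The sketch below uses a random orthonormal matrix as a stand-in for the learned projection and made-up labels; the feature sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)
d, k = 256, 16
# Stand-in for a projection matrix U learned in step S5 (orthonormal columns)
U = np.linalg.qr(rng.standard_normal((d, k)))[0]

samples = rng.standard_normal((5, d))               # one sample face each
labels = ["angry", "happy", "sad", "surprise", "neutral"]
test = samples[1] + 0.01 * rng.standard_normal(d)   # near the "happy" sample

# S61: feature distance after projection; S62: nearest-sample decision
dists = np.linalg.norm(samples @ U - test @ U, axis=1)
print(labels[int(np.argmin(dists))])   # happy
```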
[0067] The facial expression recognition algorithm of the present invention was tested, under the high-order tensor setting, on the seven expressions of different individuals contained in the JAFFE facial expression database, reaching an average recognition rate of 92.3%, versus 89.5% for current comparable methods. Tested on the facial expression sequences of the CK+ face database across different subjects, it reached an average recognition rate of 93.4%, versus 91.6% for current comparable methods. The facial expression recognition method of the present invention thus improves recognition accuracy across different individuals and has good prospects for wide application.



