Micro-expression recognition method for representative AU region extraction based on multi-task learning

A multi-task learning and region extraction technology, applied in character and pattern recognition, instruments, computing, etc. It addresses problems such as imbalanced and scarce micro-expression training data, high computational cost, and the tendency of micro-expression models to overfit, and achieves the effect of increasing the number of training samples and improving recognition performance.

Active Publication Date: 2021-08-06
SHANDONG UNIV

AI Technical Summary

Problems solved by technology

The AGACN proposed by Xie et al. combines AU and micro-expression labels, modeling different AUs and their relationships based on facial muscle movements. To address the limited and imbalanced micro-expression training samples, Xie et al. also proposed a data augmentation method that effectively improves micro-expression recognition performance. However, because the number of AUs is large, the introduced graph convolutional network must model the relationships among many AU nodes, which makes the computation heavy and the experiments inefficient.
Puneet Gupta proposed MERASTC to alleviate the tendency of micro-expression models to overfit. It combines AUs, key points, and appearance features to encode the subtle deformations in micro-expression video sequences, and introduces a new neutral-face normalization method to speed up micro-expression recognition. However, this method requires a neutral frame in the video sequence, which is a significant limitation.
Lo et al. proposed MERGCN, which extracts AU features with a 3D convolutional neural network and then uses a graph convolutional network to discover dependencies among AU nodes to help classify micro-expressions. Because this method builds the graph convolutional network from all AUs rather than selecting the most representative ones, its computational cost is high.

Method used



Examples


Embodiment 1

[0122] A micro-expression recognition method for extracting representative AU regions based on multi-task learning, comprising the following steps:

[0123] A. Preprocess the micro-expression video to obtain an image sequence containing the face region and its 68 key feature points;

[0124] B. According to the 68 key feature points, locate the AU regions, extract optical flow features within each AU region, set the number of representative AU regions, and obtain the most representative AU regions;
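
As a hedged illustration of step B, the sketch below ranks candidate AU regions by the mean optical-flow magnitude between an onset frame and an apex frame and keeps the top-k as the "most representative" regions. The Farneback flow, the fixed square window around each landmark, and the mapping from AUs to landmark indices are assumptions for illustration, not the patented procedure.

```python
import cv2
import numpy as np

def select_representative_au_regions(onset_gray, apex_gray, landmarks,
                                     au_landmark_ids, k=3, half=16):
    """onset_gray/apex_gray: HxW uint8 frames; landmarks: (68, 2) array of (x, y);
    au_landmark_ids: hypothetical dict mapping AU name -> landmark index used as region centre."""
    # Dense Farneback optical flow between the onset and apex frames.
    flow = cv2.calcOpticalFlowFarneback(onset_gray, apex_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)  # per-pixel motion strength

    scores = {}
    h, w = magnitude.shape
    for au, idx in au_landmark_ids.items():
        x, y = landmarks[idx].astype(int)
        x0, x1 = max(x - half, 0), min(x + half, w)
        y0, y1 = max(y - half, 0), min(y + half, h)
        scores[au] = float(magnitude[y0:y1, x0:x1].mean())

    # Keep the k regions with the largest average motion.
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Ranking by average flow magnitude is only one plausible criterion for "most representative"; the patent's actual selection rule may differ.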

[0125] C. Data set division: following a subject-independent K-fold cross-validation protocol, the image sequences containing the face region obtained in step A are divided into a training set and a test set, yielding a micro-expression training set and a micro-expression test set;
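
A minimal sketch of the subject-independent split in step C, assuming scikit-learn's GroupKFold and that each sample carries a subject identifier; the fold count here is illustrative.

```python
import numpy as np
from sklearn.model_selection import GroupKFold

def subject_independent_folds(labels, subject_ids, n_splits=5):
    """Yield (train_idx, test_idx) pairs such that no subject appears in both sets."""
    gkf = GroupKFold(n_splits=n_splits)
    dummy_x = np.zeros(len(labels))  # features are not needed to compute the split
    for train_idx, test_idx in gkf.split(dummy_x, labels, groups=subject_ids):
        yield train_idx, test_idx
```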

[0126] D. Feed the face image sequences processed in step A into the AU mask feature extraction network model, compute the pixel-wise cross-entropy loss and the Dice loss, and train...
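
Step D combines a pixel-wise cross-entropy loss with a Dice loss for the AU mask branch. Below is a minimal PyTorch sketch of such a combined loss; the equal weighting and the smoothing constant are assumptions, not values from the patent.

```python
import torch
import torch.nn.functional as F

def mask_loss(logits, target, ce_weight=1.0, dice_weight=1.0, eps=1.0):
    """logits: (N, 1, H, W) raw mask predictions; target: (N, 1, H, W) binary AU masks."""
    # Pixel-wise cross-entropy on the raw logits.
    ce = F.binary_cross_entropy_with_logits(logits, target.float())

    # Soft Dice loss on the sigmoid probabilities, computed per sample.
    prob = torch.sigmoid(logits)
    intersection = (prob * target).sum(dim=(1, 2, 3))
    union = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1.0 - (2.0 * intersection + eps) / (union + eps)

    return ce_weight * ce + dice_weight * dice.mean()
```

Cross-entropy drives per-pixel accuracy, while the Dice term counteracts the foreground/background imbalance typical of small AU masks.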

Embodiment 2

[0132] According to the micro-expression recognition method for extracting a representative AU region based on multi-task learning described in Embodiment 1, the difference is that:

[0133] In step A, the micro-expression video is preprocessed, including framing, face key feature point detection, face cropping, TIM interpolation, and face scaling;

[0134] 1) Framing: split the micro-expression video into a micro-expression image sequence according to its frame rate;

[0135] 2) Face key feature point detection: use the Dlib vision library to detect the 68 key feature points in each frame of the micro-expression image sequence, such as the eyes, nose tip, mouth corners, eyebrows, and facial contour points. The detection result is shown in Figure 2 (a code sketch of these first two steps is given after this list);

[0136] 3) Face cropping: determine the position of the face bounding box according to the located 68 key feature points;

[0137] In the horizontal direction, the midpoint ...
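
A minimal sketch of the framing and key-point detection steps (1 and 2) described above, assuming OpenCV for frame extraction and Dlib's public 68-point shape predictor; the model file name is the commonly distributed asset, not something specified by the patent.

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def video_to_landmarked_frames(video_path):
    """Return the extracted frames and, for each, the 68 detected (x, y) landmarks."""
    frames, landmarks = [], []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector(gray, 1)
        if faces:
            shape = predictor(gray, faces[0])
            pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
            frames.append(frame)
            landmarks.append(pts)
    cap.release()
    return frames, landmarks
```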



Abstract

The invention relates to a micro-expression recognition method for representative AU region extraction based on multi-task learning. The method comprises the following steps: A, preprocessing a micro-expression video; B, acquiring the positions of the AU regions to obtain the most representative AU regions; C, dividing a training set and a test set; D, training an AU mask feature extraction network model; E, sending the face images to the trained AU mask feature extraction network to obtain a face image sequence containing only the representative AUs; F, training a 3D-ResNet network comprising a non-local module; G, sending the sequence to the trained 3D-ResNet network containing the non-local module to obtain the classification accuracy. According to the method, the contributions of different AUs to micro-expression recognition are considered, the problem of insufficient micro-expression samples is addressed, the number of training samples is increased, and the micro-expression recognition performance is improved.
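
Steps F and G rely on a 3D-ResNet containing a non-local module. The following PyTorch sketch shows one common embedded-Gaussian form of a non-local block for 3D (video) feature maps; the channel reduction and residual form are assumptions for illustration and are not taken from the patent.

```python
import torch
import torch.nn as nn

class NonLocalBlock3D(nn.Module):
    def __init__(self, channels):
        super().__init__()
        inter = max(channels // 2, 1)                   # reduced embedding width (assumption)
        self.theta = nn.Conv3d(channels, inter, kernel_size=1)
        self.phi = nn.Conv3d(channels, inter, kernel_size=1)
        self.g = nn.Conv3d(channels, inter, kernel_size=1)
        self.out = nn.Conv3d(inter, channels, kernel_size=1)

    def forward(self, x):                               # x: (N, C, T, H, W)
        n, c, t, h, w = x.shape
        theta = self.theta(x).flatten(2).transpose(1, 2)  # (N, THW, C')
        phi = self.phi(x).flatten(2)                      # (N, C', THW)
        g = self.g(x).flatten(2).transpose(1, 2)          # (N, THW, C')
        attn = torch.softmax(theta @ phi, dim=-1)         # pairwise affinities over all positions
        y = (attn @ g).transpose(1, 2).reshape(n, -1, t, h, w)
        return x + self.out(y)                            # residual connection back to the input
```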

Description

Technical field

[0001] The invention relates to a micro-expression recognition method for extracting representative AU regions based on multi-task learning, and belongs to the technical field of deep learning and pattern recognition.

Background technique

[0002] The study of facial emotions began with Charles Darwin, who pointed out the main rules by which emotions arise, described in detail the external manifestations of different emotions and the relationship between emotions and the nervous system, and laid the foundation for emotion research. While examining a video of a severely depressed patient who was hiding his suicidal intentions, Ekman and Friesen found a frame containing a desperate expression that lasted only 2/25 of a second; Ekman and colleagues named this short-lived expression the micro-expression. [0003] Micro-expressions are different from ordinary macro-expressions. Micro-expressions are short-duration, involuntary facial expressions that reveal the ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (IPC-8): G06K 9/00, G06K 9/46, G06K 9/62
CPC: G06V 40/176, G06V 40/168, G06V 40/172, G06V 10/462, G06F 18/214
Inventor: 贲晛烨, 魏文辉, 韩民, 李梦雅, 贾文强, 李玉军
Owner: SHANDONG UNIV