
A Micro-Expression Recognition Method Based on Multi-task Learning for Representative AU Region Extraction

A multi-task learning and region-extraction technology, applied in character and pattern recognition, instruments, and computing, that addresses the problems of sample imbalance, heavy computation, and high computing cost, and achieves the effect of increasing the number of training samples and improving recognition performance.

Active Publication Date: 2022-07-29
SHANDONG UNIV


Problems solved by technology

The AGACN proposed by Xie et al. combines AU and micro-expression labels, and models the different AUs and their relationships based on facial muscle movements. To address the limited and unbalanced micro-expression training samples, Xie et al. also proposed a data-augmentation method that effectively improves micro-expression recognition performance. However, because the number of AUs is large, the graph convolutional network must consider the relationships among many AU nodes, so the computational load is heavy and the experiments are inefficient.
Puneet Gupta proposed MERASTC to alleviate the tendency of micro-expression models to overfit. It combines AUs, key points, and appearance features to encode the subtle deformations of micro-expression video sequences, and introduces a new neutral-face normalization method that speeds up micro-expression recognition. However, this method requires a neutral frame in the video sequence, which is a significant limitation.
Lo et al. proposed MERGCN, which extracts AU features with a 3D convolutional neural network and then uses a graph convolutional network to discover dependencies between AU nodes to help classify micro-expressions. This method builds the graph convolutional network from all AUs without selecting the most representative ones, which makes the computational cost high.
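The efficiency concern above — propagating features across every AU node with a graph convolution — can be sketched with a single generic GCN layer. This is an illustrative NumPy sketch, not the AGACN or MERGCN implementation; the AU count, adjacency matrix, and feature sizes are made-up values.

```python
import numpy as np

def gcn_layer(adjacency, node_features, weight):
    """One graph-convolution layer over AU nodes: ReLU(A_hat @ X @ W).

    A_hat is the symmetrically normalized adjacency with self-loops
    (Kipf-Welling style). Cost grows with the square of the node count,
    which is why building the graph from all AUs instead of a
    representative subset is expensive.
    """
    a = adjacency + np.eye(adjacency.shape[0])          # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a.sum(axis=1)))  # D^{-1/2}
    a_hat = d_inv_sqrt @ a @ d_inv_sqrt                 # symmetric normalization
    return np.maximum(a_hat @ node_features @ weight, 0.0)

# Toy graph: 17 AU nodes with 32-dim features (illustrative numbers only).
rng = np.random.default_rng(0)
num_aus, feat_dim, out_dim = 17, 32, 16
adjacency = (rng.random((num_aus, num_aus)) > 0.7).astype(float)
adjacency = np.maximum(adjacency, adjacency.T)          # symmetric co-occurrence
features = rng.standard_normal((num_aus, feat_dim))
weight = rng.standard_normal((feat_dim, out_dim))
out = gcn_layer(adjacency, features, weight)
print(out.shape)
```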



Examples


Embodiment 1

[0122] A micro-expression recognition method for representative AU region extraction based on multi-task learning, comprising the following steps:

[0123] A. Preprocess the micro-expression video to obtain an image sequence containing the face region and its 68 key feature points;

[0124] B. According to the 68 key feature points, obtain the positions of the AU regions, extract the optical-flow features within the AU regions, set the number of representative AU regions, and obtain the most representative AU regions;

[0125] C. Data-set division: using a subject-independent K-fold cross-validation scheme, divide the image sequences containing the face region obtained in step A into a training set and a test set, yielding a micro-expression training set and a micro-expression test set;

[0126] D. Feed the face image sequences processed in step A into the AU mask feature-extraction network model, and compute the pixel-wise cross-entropy loss and the Dice loss, an...
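Step D's combined objective can be sketched as a pixel-wise binary cross-entropy plus a soft Dice loss. This is a generic NumPy sketch under the assumption of a single binary AU mask and equal weighting of the two terms; the excerpt does not specify the actual weights.

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss for a binary mask; pred holds probabilities in [0, 1]."""
    intersection = (pred * target).sum()
    return 1.0 - (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def pixel_bce(pred, target, eps=1e-7):
    """Pixel-wise binary cross-entropy, averaged over all mask pixels."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-(target * np.log(pred) + (1 - target) * np.log(1 - pred)).mean())

def mask_loss(pred, target, dice_weight=1.0):
    """Combined mask objective; the 1:1 weighting is an assumption."""
    return pixel_bce(pred, target) + dice_weight * dice_loss(pred, target)

# A near-perfect prediction drives both terms toward zero.
target = np.zeros((8, 8))
target[2:6, 2:6] = 1.0
near_perfect = np.clip(target, 0.01, 0.99)
print(mask_loss(near_perfect, target))
```

The Dice term counteracts the class imbalance of small AU masks (most pixels are background), which plain cross-entropy handles poorly.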

Embodiment 2

[0132] A micro-expression recognition method for extracting representative AU regions based on multi-task learning according to Embodiment 1, the difference being:

[0133] In step A, the micro-expression video is preprocessed by framing, face key-feature-point detection, face cropping, TIM interpolation, and face scaling;

[0134] 1) Framing: according to the frame rate of the micro-expression video, split the video into a sequence of micro-expression images;

[0135] 2) Face key-feature-point detection: use the Dlib vision library to detect 68 key feature points in the micro-expression image sequence, such as the eyes, nose tip, mouth corners, eyebrows, and the contour points of the facial parts; the detection effect is shown in Figure 2;

[0136] 3) Face cropping: determine the position of the face frame according to the 68 located key feature points;

[0137] In the horizontal direction, the center point of the cr...
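The face cropping of step 3) can be sketched as a bounding box over the 68 landmark points. The sketch below assumes a (68, 2) array of (x, y) coordinates, the shape Dlib's 68-point predictor returns; the 20% margin and the synthetic points are illustrative assumptions, not values from the patent.

```python
import numpy as np

def face_crop_box(landmarks, margin=0.2):
    """Face-frame box around 68 landmarks, expanded by `margin` of the span.

    `landmarks` is a (68, 2) array of (x, y) points. Returns integer
    (left, top, right, bottom) pixel coordinates; the margin value is
    an assumption.
    """
    x_min, y_min = landmarks.min(axis=0)
    x_max, y_max = landmarks.max(axis=0)
    w, h = x_max - x_min, y_max - y_min
    return (int(x_min - margin * w), int(y_min - margin * h),
            int(x_max + margin * w), int(y_max + margin * h))

# Synthetic landmarks spread over a 100x120 face region (illustrative).
rng = np.random.default_rng(1)
pts = rng.uniform([50, 40], [150, 160], size=(68, 2))
print(face_crop_box(pts))
```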



Abstract

The invention relates to a micro-expression recognition method for extracting representative AU regions based on multi-task learning, comprising: A. preprocessing the micro-expression videos; B. locating the AU regions and selecting the most representative ones; C. dividing the data into a training set and a test set; D. training the AU mask feature-extraction network model; E. feeding sequences through the trained AU mask feature-extraction network to obtain face image sequences containing only the representative AUs; F. training a 3D-ResNet network with non-local modules; G. feeding the test data into the 3D-ResNet network with non-local modules to obtain the classification accuracy. The invention considers the contribution of different AUs to micro-expression recognition, alleviates the problem of insufficient micro-expression samples, increases the number of training samples, and improves micro-expression recognition performance.

Description

technical field

[0001] The invention relates to a micro-expression recognition method for extracting representative AU regions based on multi-task learning, and belongs to the technical field of deep learning and pattern recognition.

Background technique

[0002] Research on facial emotions began with Charles Darwin, who identified the main rules of emotion production, described in detail the external manifestations of different emotions and the relationship between emotions and the nervous system, and laid the foundation for emotion research. When Ekman and Friesen examined a video of a severely depressed patient hiding suicidal intentions, they found a frame containing a desperate expression that lasted only 2/25 of a second. Ekman and colleagues named this fleeting expression a micro-expression.

[0003] Micro-expressions differ from ordinary macro-expressions: they are short-duration, unconscious facial expressions that reveal th...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G06V40/16; G06V10/764; G06V10/46
CPC: G06V40/176; G06V40/168; G06V40/172; G06V10/462; G06F18/214
Inventors: 贲晛烨魏文辉韩民李梦雅贾文强李玉军
Owner: SHANDONG UNIV