Facial expression action unit recognition method based on a spatio-temporal graph convolutional network

A technology combining graph convolutional networks and facial action units, applied in the fields of computer vision, affective computing, emotion recognition, and human-computer interaction. It addresses problems such as overfitting, small data samples, poor robustness, and datasets that cannot meet detection needs, with the effect of improving recognition accuracy.

Inactive Publication Date: 2021-04-09
TIANJIN UNIV

AI Technical Summary

Problems solved by technology

However, AU detection still faces many challenges. In real application scenarios, AU data is highly complex: factors such as head pose, occlusion, and complex lighting cause a significant decline in the performance of AU recognition models.
In addition, individual differences in race, skin color, age, and gender create large intra-class variation, which also significantly affects the accuracy of AU recognition.
Moreover, because AU labeling must be performed by trained experts and is very time-consuming, the datasets available for training fall far short of the needs of detection across races in complex scenarios; with such small data samples, models are prone to overfitting.
[0005] Most existing AU detection models consider only the correlation between different AUs at the same time point, while ignoring correlations across space and time.




Embodiment Construction

[0042] The present invention extracts features from facial action unit (AU) regions with an autoencoder, then constructs a spatio-temporal relationship graph of the AU sequence based on the spatio-temporal relationships between AUs, performs a graph convolution operation on that graph with a spatio-temporal graph convolutional network (ST-GCN) model, and finally performs AU identification with a fully connected network to detect the occurrence and intensity of AUs.
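As a rough illustration of the graph-convolution operation referred to above, the sketch below implements a single symmetrically normalized graph-convolution layer in NumPy. The function name, the ReLU activation, and the normalization choice are assumptions for illustration; the patent's actual ST-GCN architecture is not specified in this excerpt.

```python
import numpy as np

def gcn_layer(X, A, W):
    """One graph-convolution layer: H = ReLU(D^-1/2 (A+I) D^-1/2 X W).

    X: (N, F_in) node features, A: (N, N) adjacency, W: (F_in, F_out) weights.
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^-1/2
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
    return np.maximum(A_norm @ X @ W, 0.0)    # ReLU activation
```

Stacking such layers over the AU spatio-temporal graph, then feeding the resulting node features to a fully connected classifier, matches the overall pipeline described in the paragraph above.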

[0043] Concrete steps of the present invention are as follows:

[0044] First, the input image frame sequence is segmented and the local AU region of interest (ROI) is extracted from each frame. A convolutional autoencoder is then used to extract deep features from each facial AU ROI.
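The ROI extraction step can be sketched as fixed-size crops around per-AU center points, assumed here to come from facial landmarks. The function name, patch size, and boundary-clipping policy are illustrative assumptions, not the patent's specification:

```python
import numpy as np

def extract_au_rois(frame, au_centers, size=32):
    """Crop a size x size patch around each AU center point.

    frame: (H, W) or (H, W, C) image array.
    au_centers: list of (cx, cy) pixel coordinates, one per AU (hypothetical
    landmark-derived centers).
    """
    h, w = frame.shape[:2]
    half = size // 2
    rois = []
    for (cx, cy) in au_centers:
        # Clip so the crop stays fully inside the frame.
        x0 = int(np.clip(cx - half, 0, w - size))
        y0 = int(np.clip(cy - half, 0, h - size))
        rois.append(frame[y0:y0 + size, x0:x0 + size])
    return np.stack(rois)  # (num_AUs, size, size[, C])
```

Each cropped patch would then be passed through the convolutional autoencoder's encoder to obtain the deep AU feature vector used in the next step.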

[0045] Next, the deep representation vector of each AU extracted in the previous step is used as a node to construct an undirected spatio-temporal relationship graph of the AU sequence. Nodes are connected in space and time according to the...
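A minimal sketch of the undirected spatio-temporal graph construction: spatial edges connect related AUs within a frame, and temporal edges link the same AU across adjacent frames. The patent's exact connection criterion is not given in this excerpt, so the explicit pair list and the adjacent-frame linking rule are assumptions:

```python
import numpy as np

def build_st_graph(num_frames, num_aus, spatial_pairs):
    """Adjacency matrix over T*K nodes, node id = t * num_aus + au_index.

    spatial_pairs: list of (i, j) AU index pairs assumed to be related
    (e.g. by co-occurrence closeness).
    """
    n = num_frames * num_aus
    A = np.zeros((n, n))
    for t in range(num_frames):
        base = t * num_aus
        for i, j in spatial_pairs:          # spatial edges within a frame
            A[base + i, base + j] = A[base + j, base + i] = 1
        if t + 1 < num_frames:              # temporal edges: same AU, next frame
            nxt = (t + 1) * num_aus
            for i in range(num_aus):
                A[base + i, nxt + i] = A[nxt + i, base + i] = 1
    return A
```

The resulting symmetric adjacency matrix, together with the per-node AU feature vectors, forms the input to the graph-convolution stage.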



Abstract

The invention discloses a facial expression action unit recognition method based on a spatio-temporal graph convolutional network. The method comprises the steps of: performing feature extraction on facial action unit (AU) regions with a convolutional autoencoder; constructing a spatio-temporal relationship graph of the AU sequence according to the closeness of the spatio-temporal relationships between AUs; and finally performing AU recognition based on ST-GCN. The method recognizes facial action units with a spatio-temporal graph convolutional network, models the spatio-temporal dependencies between AUs with an undirected spatio-temporal graph, and learns deep AU representation features with the spatio-temporal graph convolutional network, thereby improving AU recognition accuracy. The method can effectively address the poor robustness and low accuracy of existing AU detection models, and can be widely applied in expression analysis, affective computing, and human-computer interaction.

Description

technical field [0001] The present invention relates to the technical fields of computer vision and affective computing, and in particular to a facial expression action unit (Action Unit, AU) recognition method based on a Spatial-Temporal Graph Convolutional Network (ST-GCN). It is widely applicable in emotion recognition, human-computer interaction, and other fields. Background technique [0002] Facial expressions can reveal people's inner activities, mental state, and outwardly communicated social behaviors. With the development of artificial intelligence, human-centered facial expression recognition has gradually attracted widespread attention in industry and academia. Expression analysis using a facial action coding system is one of the common methods for facial expression recognition. [0003] The Facial Action Coding System (FACS) divides the human face into 44 facial action units according to muscle movements from an anatomical point of view, which is ...

Claims


Application Information

Patent Type & Authority: Applications (China)
IPC (8): G06K9/00, G06K9/32, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/08, G06V40/176, G06V10/25, G06N3/045, G06F18/214, G06F18/2411
Inventor: 刘志磊, 张庆阳, 董威龙, 陈浩阳, 都景舜
Owner: TIANJIN UNIV