
Face activity unit detection method based on multi-task learning

A detection method based on multi-task learning, applied in the field of facial action unit (AU) detection.

Active Publication Date: 2020-10-16
UNIV OF SCI & TECH BEIJING
Cites: 5 | Cited by: 3

AI Technical Summary

Problems solved by technology

[0007] The object of the present invention is to provide a facial action unit (AU) detection method based on multi-task learning that realizes the AU detection task and addresses the following problems. (1) A convolutional neural network (CNN) is adopted to learn facial feature information, and a multi-level fusion strategy combines the features learned by the low-level and high-level CNN layers, so that the network loses as little information as possible and the learned facial features are richer. (2) Auxiliary tasks such as head pose estimation, landmark detection, gender recognition, and expression recognition enhance the performance of the AU detection task; the landmark detection and AU detection tasks interact, which improves the performance of both. Training the auxiliary tasks allows the network to learn more features, and exploring the relationship between the auxiliary tasks and the main task makes the features more task-specific. (3) An online hard sample selection mechanism and a weighted loss function strategy alleviate the impact of data imbalance: a weight is assigned to each sample according to the training results to adjust the training process, realizing online hard sample selection, and a weight is assigned to each AU detection loss function according to the ratio of positive to negative AU samples, realizing the weighted loss function. Extensive experiments on benchmark databases demonstrate the remarkable performance of the invention compared with state-of-the-art AU detection techniques.
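To make point (3) concrete, here is a minimal PyTorch sketch of a per-AU weighted loss with online hard sample selection. The inverse positive/negative weighting, the top-k hard-sample rule, and every name below are illustrative assumptions; the excerpt does not give the patent's exact formulas.

```python
import torch
import torch.nn.functional as F

def au_weighted_bce(logits, labels, pos_neg_ratio, hard_frac=0.7):
    """Weighted AU loss with online hard sample selection (sketch).

    logits, labels: (batch, num_aus); labels are 0/1.
    pos_neg_ratio:  (num_aus,) positives/negatives per AU, computed
                    once from the training set (an assumed weighting).
    hard_frac:      fraction of hardest samples kept per step, an
                    illustrative stand-in for the selection mechanism.
    """
    # Rarer positives get a larger weight, so each AU's loss term
    # contributes comparably despite the class imbalance.
    pos_weight = 1.0 / torch.clamp(pos_neg_ratio, min=1e-6)
    per_elem = F.binary_cross_entropy_with_logits(
        logits, labels.float(), pos_weight=pos_weight, reduction="none")

    # Online hard sample selection: keep only the highest-loss samples
    # in the batch for the backward pass.
    per_sample = per_elem.mean(dim=1)
    k = max(1, int(hard_frac * per_sample.numel()))
    return torch.topk(per_sample, k).values.mean()
```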




Embodiment Construction

[0076] To make the technical problems to be solved, the technical solutions, and the advantages of the present invention clearer, a detailed description is given below with reference to the accompanying drawings and specific embodiments.

[0077] Embodiments of the present invention provide a method for detecting facial action units based on multi-task learning. As shown in Figure 1, the method includes the following steps:

[0078] Auxiliary task learning: the AlexNet network before the global average pooling layer is used as a shared structure to extract shared global facial features, and the extracted features are respectively sent into task-specific independent network structures to obtain the outputs of the auxiliary tasks. The auxiliary task outputs comprise landmark detection, gender recognition, head pose estimation, and expression recognition.
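A hedged sketch of this shared-trunk, multi-head arrangement follows, using torchvision's AlexNet as a stand-in for the patent's network. The head output sizes (68 landmarks, 2 genders, 3 pose angles, 6 expressions) are assumptions taken loosely from the surrounding text, not dimensions the patent specifies.

```python
import torch.nn as nn
from torchvision.models import alexnet

class AuxiliaryMultiTask(nn.Module):
    """Sketch of [0078]: AlexNet's conv layers (everything before global
    pooling) as a shared trunk, with one small head per auxiliary task."""

    def __init__(self):
        super().__init__()
        # torchvision's AlexNet: .features are the conv layers that come
        # before any global pooling; they serve as the shared structure.
        self.shared = alexnet(weights=None).features
        self.pool = nn.AdaptiveAvgPool2d(1)
        feat_dim = 256  # channels out of AlexNet's last conv block

        def head(out_dim):
            return nn.Sequential(nn.Flatten(), nn.Linear(feat_dim, out_dim))

        self.landmarks = head(68 * 2)   # x, y per landmark (assumed 68 points)
        self.gender = head(2)
        self.head_pose = head(3)        # yaw, pitch, roll (assumed)
        self.expression = head(6)       # six basic expressions from [0002]

    def forward(self, x):
        shared_feat = self.shared(x)    # shared global facial features
        pooled = self.pool(shared_feat)
        return shared_feat, {
            "landmarks": self.landmarks(pooled),
            "gender": self.gender(pooled),
            "head_pose": self.head_pose(pooled),
            "expression": self.expression(pooled),
        }
```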

[0079] Main task learning: the face is cut into an upper half face and a lower half face, which are respectively input ...
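The excerpt is truncated here; the Abstract below completes the step (the two half faces go into a modified ResNet50, the shared global features are fused in, and an attention mechanism is added). A minimal sketch under those assumptions, with an unmodified ResNet50 trunk and a simple sigmoid gate standing in for the unspecified modifications and attention design:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class HalfFaceAUBranch(nn.Module):
    """Sketch of [0079]: split the face into halves, run each half
    through a ResNet50 trunk, fuse in the shared global features from
    the auxiliary step, and apply a channel-attention gate. The fusion
    and attention details are assumptions, not the patent's design."""

    def __init__(self, shared_dim=256, feat_dim=2048):
        super().__init__()
        def trunk():  # ResNet50 without its final fc layer
            return nn.Sequential(*list(resnet50(weights=None).children())[:-1])
        self.upper = trunk()   # processes the upper half face
        self.lower = trunk()   # processes the lower half face
        self.fuse = nn.Linear(2 * feat_dim + shared_dim, feat_dim)
        self.attention = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.Sigmoid())

    def forward(self, face, shared_feat):
        h = face.shape[2]
        upper, lower = face[:, :, : h // 2], face[:, :, h // 2 :]
        fu = self.upper(upper).flatten(1)
        fl = self.lower(lower).flatten(1)
        g = shared_feat.mean(dim=(2, 3))      # pool the shared conv features
        fused = self.fuse(torch.cat([fu, fl, g], dim=1))
        return fused * self.attention(fused)  # gated AU-related features
```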



Abstract

The invention provides a facial action unit detection method based on multi-task learning. The method comprises the following steps. Auxiliary task learning: the AlexNet network before the global average pooling layer is taken as a shared structure to extract shared global facial features, and the extracted features are respectively sent into task-specific independent network structures to obtain the outputs of the auxiliary tasks, which comprise landmark detection, gender recognition, head pose estimation, and expression recognition. Main task learning: the face is cut into an upper half face and a lower half face, which are respectively input into a modified ResNet50 network to learn features related to the action units; the shared global facial features extracted in the auxiliary task learning step are fused in, and an attention mechanism is added. Feature combination: the outputs of the auxiliary tasks are combined to serve as relationship information that corrects the action-unit-related features. The invention relates to the technical field of human-computer interaction and pattern recognition.
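The feature combination step is described only at this level of detail, so the following sketch treats the correction as a residual refinement predicted from the concatenated auxiliary outputs; that design, and the AU count of 12, are assumptions rather than the patent's stated method.

```python
import torch
import torch.nn as nn

class FeatureCombination(nn.Module):
    """Sketch of the 'feature combination' step: the auxiliary-task
    outputs are concatenated as relationship information and used to
    correct the AU-related features before classification."""

    def __init__(self, au_dim=2048, aux_dim=68 * 2 + 2 + 3 + 6, num_aus=12):
        # aux_dim sums the assumed auxiliary head sizes; num_aus=12 is an
        # illustrative AU count, not taken from the patent text.
        super().__init__()
        self.correct = nn.Linear(au_dim + aux_dim, au_dim)
        self.classifier = nn.Linear(au_dim, num_aus)  # one logit per AU

    def forward(self, au_feat, aux_outputs):
        # Relationship information: every auxiliary prediction, flattened
        # into one vector alongside the AU-related features.
        rel = torch.cat([au_feat] + [o.flatten(1) for o in aux_outputs], dim=1)
        corrected = au_feat + self.correct(rel)   # residual correction
        return self.classifier(corrected)         # AU detection logits
```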

Description

Technical Field

[0001] The invention relates to the technical field of human-computer interaction and pattern recognition, in particular to a method for detecting facial action units based on multi-task learning.

Background

[0002] The internationally renowned psychologist Paul Ekman proposed a facial action coding system from an anatomical point of view, dividing the facial muscles into several independent yet interrelated action units (Action Units, AUs) to describe facial expressions at a finer granularity. Using AUs to describe expressions for expression recognition has two advantages: ① most existing expression recognition work is based on six basic expressions (happiness, sadness, fear, anger, surprise, and disgust), but human facial expressions are far richer than these six, and AUs can be combined to describe many more expressions; ② describing and combining expressions with AUs makes it possible to explore the connection between AUs and expressions, which will...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/08; G06V40/165; G06V40/171; G06N3/045; G06F18/253
Inventors: 支瑞聪, 周才霞
Owner: UNIV OF SCI & TECH BEIJING