Facial expression action unit adversarial synthesis method based on local attention model

An attention-model-based facial expression synthesis technology, applied in the fields of computer vision and affective computing, which addresses problems such as loss of detail, method complexity, and lack of authenticity in synthesized expressions, achieving the effect of improved accuracy.

Pending Publication Date: 2019-11-15
TIANJIN UNIV

AI Technical Summary

Problems solved by technology

[0004] Facial AU annotation is complex and is easily affected by different face shapes, expressions, lighting conditions, and facial poses, so the synthesis of facial expression action units based on ...

Method used


Image

  • Facial expression action unit adversarial synthesis method based on local attention model

Examples

Experimental program
Comparison scheme
Effect test

Embodiment Construction

[0036] The present invention performs feature extraction on facial action unit (AU) regions through a local attention model, then performs adversarial generation on the local regions, and finally evaluates AU intensity on the augmented data to assess the quality of the generated images.
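The patent does not give an implementation of the local attention model. A minimal numpy sketch of the first stage, under the assumption that attention takes the form of a Gaussian map centered on a hypothetical AU landmark (the landmark coordinates, map shape, and patch size here are illustrative, not from the patent):

```python
import numpy as np

def gaussian_attention_map(h, w, center, sigma=8.0):
    """Soft attention map peaking at an AU landmark (assumed Gaussian form)."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = center
    d2 = (ys - cy) ** 2 + (xs - cx) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def extract_au_region(face, center, size=16):
    """Weight the face by the attention map and crop a local AU patch."""
    h, w = face.shape
    att = gaussian_attention_map(h, w, center)
    cy, cx = center
    half = size // 2
    patch = (face * att)[cy - half:cy + half, cx - half:cx + half]
    return patch, att

face = np.random.rand(64, 64)                 # stand-in grayscale face image
patch, att = extract_au_region(face, center=(32, 20))
print(patch.shape)  # (16, 16)
```

The attention weighting suppresses pixels far from the AU landmark, so later stages operate on a localized feature rather than the whole face.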

[0037] The concrete steps of the present invention are as follows:

[0038] First, the local attention model is used to extract features from the local facial regions, and the AU feature distribution extracted by the local attention model is learned and modeled with a conditional adversarial autoencoder, yielding a face feature vector from which the original AU intensity has been removed. Then, label information specifying the desired AU intensity is added to the feature layer of the autoencoder's generator, so that an image of the corresponding AU intensity is generated. This achieves the effect of changing the AU intensity, so as to establish facial action units with different comb...
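The step above, conditioning the generator's feature layer on an AU-intensity label, can be sketched in numpy. This is a toy linear encoder/decoder, not the patent's CAAE: the layer sizes, the number of intensity levels, and the one-hot label encoding are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT, N_LEVELS, FEAT = 8, 5, 32   # hypothetical dimensions

W_enc = rng.standard_normal((FEAT, LATENT)) * 0.1
W_dec = rng.standard_normal((LATENT + N_LEVELS, FEAT)) * 0.1

def encode(x):
    """Map an AU feature to a code with the original AU intensity removed (sketch)."""
    return x @ W_enc

def decode(z, intensity_level):
    """Concatenate a one-hot AU-intensity label onto the generator's feature layer."""
    label = np.zeros(N_LEVELS)
    label[intensity_level] = 1.0
    return np.concatenate([z, label]) @ W_dec

x = rng.standard_normal(FEAT)            # attention-extracted AU feature
z = encode(x)
x_level3 = decode(z, intensity_level=3)  # regenerate at a specified intensity
print(x_level3.shape)  # (32,)
```

Because the label enters at the feature layer, the same identity code z can be decoded at any requested intensity level, which is what makes the method usable for data augmentation.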


PUM

No PUM

Abstract

The invention relates to the fields of computer vision and affective computing. In order to improve the diversity and universality of AU expression data sets, the technical scheme adopted by the invention is a facial expression action unit adversarial synthesis method based on a local attention model. The method comprises the steps of: performing feature extraction on facial action unit (AU) regions through the local attention model; performing adversarial generation on the local regions; and finally performing AU intensity evaluation on the augmented data, so as to assess the quality of the generated images. The method is mainly applied to occasions such as image processing and face recognition.
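The final evaluation step compares the AU intensity estimated on each synthesized image against the intensity that was requested. A minimal sketch of such a quality check, assuming intensities are coded on the common 0-5 FACS-style scale and using mean absolute error as the metric (the patent does not name a specific metric):

```python
import numpy as np

def intensity_mae(predicted, target):
    """Mean absolute error between estimated and requested AU intensities."""
    predicted = np.asarray(predicted, dtype=float)
    target = np.asarray(target, dtype=float)
    return float(np.mean(np.abs(predicted - target)))

# Hypothetical values: intensities requested at synthesis time vs. those
# estimated by an AU-intensity regressor on the generated images.
requested = [0, 1, 2, 3, 4, 5]
estimated = [0.2, 1.2, 1.8, 3.2, 3.8, 4.8]
print(round(intensity_mae(estimated, requested), 3))  # 0.2
```

A low error indicates the generator faithfully renders the requested intensity, which is the stated criterion for the quality of the generated images.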

Description

Technical field:

[0001] The present invention relates to the fields of computer vision and affective computing, and in particular to a method for synthesizing facial expression action units based on the combination of a Conditional Adversarial Autoencoder (CAAE) and a Local Attention Model. It can be widely used in applications such as data augmentation and improving the accuracy of facial expression recognition models.

Background technique:

[0002] Facial expressions can reveal people's inner activities, mental state, and outwardly communicated social behaviors. With the development of artificial intelligence, human-centered facial expression recognition has gradually attracted widespread attention in industry and academia. In the field of facial expression analysis, there are two main ways to label human expressions: labeling based on emotional categories (such as joy, anger, etc.) and labeling based on the Facial Action Coding System ...

Claims


Application Information

Patent Timeline
no application
IPC(8): G06K9/00, G06T7/00, G06T11/00
CPC: G06T7/0002, G06T11/00, G06T2207/30168, G06T2207/30201, G06V40/171, G06V40/174, G06V40/172
Inventor: 刘志磊, 张翠翠, 刘迪一
Owner: TIANJIN UNIV