
Face continuous expression recognition method based on time sequence attention mechanism

A facial expression recognition and attention technology, applied to neural learning methods, character and pattern recognition, and facial feature acquisition/recognition. It stabilizes training, reduces output jitter, and weakens the effect of errors.

Pending Publication Date: 2021-05-07
BEIJING NORMAL UNIVERSITY

AI Technical Summary

Problems solved by technology

[0003] To address this problem, the present invention proposes a facial continuous expression recognition method based on a temporal attention mechanism, which extracts spatiotemporal salient features of the human face, improves recognition accuracy, and solves the problems that inter-frame dependencies cannot be accurately expressed, that training is unstable, and that output values jitter heavily. The present invention also provides an improved 3D convolutional neural network model: the model adds temporal and spatial attention layers in order to better represent dependencies between spatiotemporal positions, and replaces the traditional single-neuron regression layer with an expectation regression layer in order to solve gradient instability and output jitter during training.
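The expectation regression layer described above can be sketched as follows: instead of a single linear output neuron, the network produces logits over a set of discrete emotion-value bins, and the prediction is the softmax-weighted expectation of the bin centers. This is a minimal plain-Python sketch; the bin count and the [-1, 1] value range are illustrative assumptions, not details from the patent.

```python
import math

def expectation_regression(logits, bin_values):
    """Expected-value regression: softmax over discrete bins, then
    the prediction is the probability-weighted mean of the bin
    centers. This tends to yield smoother, lower-jitter outputs
    than a single linear regression neuron."""
    m = max(logits)                            # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return sum(p * v for p, v in zip(probs, bin_values))

# Illustrative: 5 bins covering an emotion axis in [-1, 1]
bins = [-1.0, -0.5, 0.0, 0.5, 1.0]
pred = expectation_regression([0.1, 0.3, 2.0, 0.3, 0.1], bins)
```

Because the prediction is an average over all bins rather than a raw scalar, small perturbations of the logits move the output only slightly, which matches the patent's stated goal of reducing output jitter.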

Method used


Embodiment Construction

[0030] The technical solutions in the embodiments of the present invention will be described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by persons of ordinary skill in the art, based on the embodiments of the present invention and without creative effort, fall within the protection scope of the present invention.

[0031] According to an embodiment of the present invention, as shown in Figure 5, a method for continuous facial expression recognition based on a temporal attention mechanism is proposed, comprising the following steps:

[0032] 1) Spatio-temporal feature extraction of image sequences:

[0033] Step 1-1) determines the input sequence length T and the frame step size D. Then, assuming that the i-th frame is the target frame ...
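Although the excerpt is truncated here, the sampling implied by a sequence length T, a step size D, and a target frame i can be sketched as selecting T frame indices ending at frame i and spaced D apart. Clamping out-of-range indices to the start of the video is my assumption for illustration; the patent text does not specify the boundary handling.

```python
def sample_clip_indices(i, T, D, num_frames):
    """Return T frame indices ending at target frame i, spaced D apart.
    Indices that fall before the start of the video are clamped to
    frame 0 (an assumption; the excerpt is truncated at this point)."""
    idxs = [i - (T - 1 - k) * D for k in range(T)]
    return [min(max(j, 0), num_frames - 1) for j in idxs]

# Target frame 10, clip length T=4, step D=3, video of 100 frames
clip = sample_clip_indices(10, 4, 3, 100)   # → [1, 4, 7, 10]
```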


Abstract

The invention relates to a method for continuous facial expression recognition based on a temporal attention mechanism. Video frame features are extracted by a deep convolutional neural network and combined with a spatiotemporal attention mechanism, which strengthens the description of features in the spatial domain, adds a description of features in the temporal domain, and performs emotion prediction related to the spatiotemporal context in a continuous-dimension emotion space. Emotion change is a gradual process, and good continuous emotion recognition accuracy is difficult to obtain from the expression features of a single frame's spatial domain alone. Image frames that are close in the temporal domain generally exhibit regularity, and temporal-domain feature computation can provide reliable data support for learning-based multi-frame feature fusion. The method comprises the following steps: based on a pleasure-activation emotion space and a spatiotemporal attention mechanism, extracting the context dependencies of continuous multi-frame expressions in a video and learning the facial muscle movement patterns of the expression generation process; and establishing a continuous expression recognition model. The method can be applied to fields such as criminal investigation and civil aviation safety screening.
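The multi-frame fusion described in the abstract can be illustrated with a minimal temporal attention pooling step: each frame's feature vector is scored against a query vector, the scores are softmax-normalized, and the fused representation is the attention-weighted sum. This plain-Python sketch uses scaled dot-product scoring as an assumed scheme; the patent's exact attention layer is not given in this excerpt.

```python
import math

def temporal_attention_pool(frame_feats, query):
    """Weight each frame's feature vector by a softmax-normalized
    scaled dot-product score against a query vector, then return the
    attention-weighted sum: frames more relevant to the query
    contribute more to the fused clip-level representation."""
    d = len(query)
    scores = [sum(f_j * q_j for f_j, q_j in zip(f, query)) / math.sqrt(d)
              for f in frame_feats]
    m = max(scores)                            # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    fused = [sum(w * f[j] for w, f in zip(weights, frame_feats))
             for j in range(len(frame_feats[0]))]
    return fused, weights

# Three 2-D frame features; the query aligns most with the last frame
feats = [[1.0, 0.0], [0.0, 1.0], [3.0, 3.0]]
fused, w = temporal_attention_pool(feats, [1.0, 1.0])
```

In a real model the per-frame features would come from the convolutional backbone and the weights would be learned, but the pooling arithmetic is the same.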

Description

Technical field

[0001] The invention relates to the fields of artificial intelligence and computer human-computer interaction, in particular to a video-based continuous-dimension emotion recognition method.

Background technique

[0002] With the deepening of research in artificial intelligence and emotional psychology, it has become possible to use computers to automatically identify and monitor facial micro-expressions in videos and to analyze emotional states and behaviors. Emotional analysis of human faces in videos is widely applied in criminal investigation, civil aviation safety screening, education and training, specialized medical treatment, and e-commerce, and has high application value. Emotional analysis of a suspect or a monitored person helps criminal investigators correctly identify the relationship between behavior and psychological emotion and provides a basis for judgment. In the questioning over the transportation of dangerous goods in civil aviation or the...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/00, G06K9/62, G06N3/04, G06N3/08
CPC: G06N3/08, G06V40/174, G06N3/047, G06F18/241, G06F18/2415
Inventors: 樊亚春, 程厚森, 税午阳
Owner: BEIJING NORMAL UNIVERSITY