
Video expression recognition method based on deep residual attention network

An expression recognition and attention technology, applied to neural learning methods, character and pattern recognition, and biological neural network models, that addresses the problem of prior methods failing to account for differences in the intensity of emotional representation across the regions of a face image.

Pending Publication Date: 2020-10-20
TAIZHOU UNIV

AI Technical Summary

Problems solved by technology

[0007] The present invention aims to overcome two shortcomings of video expression recognition in the prior art: it does not take into account the difference in the intensity of emotion representation across the local regions of a face image, and it does not account for the semantic gap between handcrafted features and the subjective emotions in a video. To this end, a video expression recognition method based on a deep residual attention network is provided. The method employs a spatial attention mechanism: spatially distributed weights are generated for an input feature map and combined with the feature map by weighted summation, supervising the network to assign different attention (weights) to the regions of the face image closely related to expression. Feature learning is thereby focused on the expression-relevant target regions, which improves the feature representation ability of the deep residual network and, in turn, the performance of video expression recognition.
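As a concrete illustration, here is a minimal PyTorch sketch of such a spatial attention mechanism. The 1x1-convolution weight generator, the sigmoid normalization, and the residual form of the weighted combination are assumptions, since the text only states that spatially distributed weights are generated from the input feature map and combined with it by weighted summation.

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Generates a spatial weight map from the input feature map and
    combines it with the features by weighted summation.
    Hypothetical sketch: the 1x1-conv weight generator and the
    residual (identity + weighted) combination are assumptions, not
    taken from the patent text."""

    def __init__(self, in_channels: int):
        super().__init__()
        # Reduce the feature map to a single-channel spatial weight map.
        self.weight_gen = nn.Conv2d(in_channels, 1, kernel_size=1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width) feature map.
        attn = self.sigmoid(self.weight_gen(x))  # (batch, 1, H, W) in [0, 1]
        # Weighted summation with the input: expression-relevant regions
        # receive larger weights, while the identity term preserves the
        # original signal, as in residual attention networks.
        return x * (1.0 + attn)

# Usage on a dummy feature map:
if __name__ == "__main__":
    feat = torch.randn(2, 64, 28, 28)
    out = SpatialAttention(64)(feat)
    print(out.shape)  # torch.Size([2, 64, 28, 28])
```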



Examples


Embodiment 1

[0048] Embodiment 1: A video expression recognition method based on a deep residual attention network of this embodiment, as shown in Figure 1, comprises the following steps:

[0049] S1. Perform video data preprocessing on video samples;

[0050] Step S1 comprises the following steps:

[0051] S1.1. First, for each video sample, select the image frames of the peak-intensity (apex) phase;

[0052] S1.2. Adopt the Haar-cascades detection model to carry out face detection. Face detection in step S1.2 comprises the following steps (a code sketch follows the steps):

[0053] Step 1. First convert the input image into a grayscale image to remove color interference;

[0054] Step 2. Set the size of the face search window, scan the input image for faces in turn, and crop and save each detected face;

[0055] Step 3. According to the standard distance between the two eyes, crop images containing key expression parts such as the mouth, nose, and forehead from the original facial expres...
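A minimal sketch of Steps 1 and 2 above using OpenCV's Haar-cascade detector. The cascade file, the detectMultiScale parameters, and the crop handling are assumptions, since the patent does not specify them; Step 3's eye-distance-based cropping would additionally require an eye detector and is omitted here.

```python
import cv2

# Load OpenCV's pre-trained frontal-face Haar cascade. The exact model
# file used by the patent is not specified; this choice is an assumption.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detect_and_crop_faces(image_bgr, min_size=(48, 48)):
    """Step 1: convert to grayscale to remove color interference.
    Step 2: scan the image for faces and crop each detection."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(
        gray,
        scaleFactor=1.1,   # search-window growth between scales (assumed)
        minNeighbors=5,    # detection-stability threshold (assumed)
        minSize=min_size,  # smallest face window to consider (assumed)
    )
    # Crop and return each detected face region.
    return [image_bgr[y:y + h, x:x + w] for (x, y, w, h) in faces]

# Usage: crop faces from one apex frame of a video sample
# ("apex_frame.jpg" is a hypothetical path).
frame = cv2.imread("apex_frame.jpg")
face_crops = detect_and_crop_faces(frame)
```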



Abstract

The invention discloses a video expression recognition method based on a deep residual attention network. The method comprises the following steps: S1, performing video data preprocessing on a video sample; S2, performing facial expression feature extraction on the face image by adopting a deep residual attention network; and S3, processing the features extracted in step S2, then carrying out training and testing, and outputting a final classification result of the facial expressions. The method employs a spatial attention mechanism: spatially distributed weights are generated for an input feature map and then combined with the feature map by weighted summation, so that network learning is supervised to allocate different attention (weights) to the areas of the face image closely related to expressions. Feature learning can thus be focused on the expression-relevant target areas, the feature representation capability of the deep residual network is improved, and the performance of video expression recognition is further improved.
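A minimal sketch of step S3, assuming the features extracted by the network in S2 are fed to a linear softmax classifier for training and testing. The 512-dimensional feature size, the seven expression classes, and the SGD settings are all assumptions not given in the abstract.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: 512-d features from the deep residual
# attention network (S2) and seven basic expression classes.
FEATURE_DIM, NUM_CLASSES = 512, 7

classifier = nn.Linear(FEATURE_DIM, NUM_CLASSES)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(classifier.parameters(), lr=0.01)

def train_step(features: torch.Tensor, labels: torch.Tensor) -> float:
    """One training step on a batch of extracted features (S3)."""
    optimizer.zero_grad()
    loss = criterion(classifier(features), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def predict(features: torch.Tensor) -> torch.Tensor:
    """Testing: output the final expression class for each sample."""
    return classifier(features).argmax(dim=1)

# Usage on dummy features:
feats = torch.randn(8, FEATURE_DIM)
labels = torch.randint(0, NUM_CLASSES, (8,))
print(train_step(feats, labels), predict(feats))
```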

Description

technical field

[0001] The invention relates to the technical fields of image processing and pattern recognition, in particular to a video expression recognition method based on a deep residual attention network.

Background technique

[0002] Communication between people is full of emotion; the expression of emotion is the most primitive instinct of human beings, and the basic elements of emotion are the aggregate of various expressions. In the past, people recorded their lives through words or photos; now most people record important memories and emotional expressions in the form of video blogs and short videos.

[0003] Feature extraction is an important part of video expression recognition. In early video expression recognition, most researchers used handcrafted features for the classification of video expressions. Representative handcrafted features mainly include: Local Binary Pattern (LBP), Local Phase Quantization...


Application Information

IPC(8): G06K9/00; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/084; G06V40/168; G06V40/174; G06N3/045; G06F18/214
Inventor: 赵小明; 张石清
Owner: TAIZHOU UNIV