
Multi-modal emotion recognition method based on fusion attention network

An emotion recognition and multi-modal fusion technology, applied in character and pattern recognition, special data processing applications, instruments, etc. It addresses the problem that existing fusion methods do not consider the differing importance of each modality's state information, and achieves the effect of improving recognition accuracy.

Active Publication Date: 2019-08-30
ZHEJIANG UNIV OF TECH

AI Technical Summary

Problems solved by the technology

The shortcoming of the above three methods of linearly extracting and fusing feature vectors lies in the uniform weighting of each modality during multi-modal fusion: they select particular state information from the output state information as the encoded state information, considering only that each piece of state information by itself affects the final emotion intensity output, while ignoring that the importance of each piece of state information is not the same.
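To make the point concrete, here is a minimal sketch in Python with NumPy (not from the patent itself) contrasting uniform averaging of per-modality state vectors with an attention-weighted average; the variable names and toy values are hypothetical.

```python
import numpy as np

# Hypothetical state vectors from three modality sub-networks at one moment.
h_text   = np.array([0.9, 0.1])  # strongly indicative of the emotion
h_visual = np.array([0.2, 0.2])  # weakly informative
h_audio  = np.array([0.1, 0.3])  # weakly informative
states = np.stack([h_text, h_visual, h_audio])

# Uniform fusion: every modality gets weight 1/3 regardless of relevance.
uniform = states.mean(axis=0)

# Attention fusion: weights derived from (illustrative) relevance scores,
# e.g. each state's correlation with the other modalities' states.
scores = np.array([2.0, 0.5, 0.3])
alpha = np.exp(scores) / np.exp(scores).sum()  # softmax attention distribution
attended = (alpha[:, None] * states).sum(axis=0)

print(uniform)   # the informative text state is diluted
print(attended)  # the text state dominates the fused vector
```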



Detailed Description of the Embodiments

[0026] The present invention will be further described below in conjunction with the accompanying drawings and embodiments.

[0027] Referring to figure 1 and figure 2, a multi-modal emotion recognition method based on a fusion attention network comprises the following steps:

[0028] Step 1: extract the high-dimensional features of the three modalities (text, vision, and audio). The process is as follows:

[0029] Extract the text features as $\{l_1, l_2, \ldots, l_{T_l}\}$, where $T_l$ is the number of words in the opinion speech video (in this embodiment, $T_l = 20$) and $l_t$ represents the 300-dimensional GloVe word-embedding feature vector. Extract the visual features using the FACET facial expression analysis framework as $\{v_1, v_2, \ldots, v_{T_v}\}$, where $T_v$ is the total number of frames of the video and the $p$ visual features extracted at the $j$-th frame are $v_j = (v_j^1, v_j^2, \ldots, v_j^p)$; in this embodiment, $p = 46$. Use the COVAREP acoustic analysis framework to extract the audio features as $\{a_1, a_2, \ldots, a_{T_a}\}$, where $T_a$ is the number of segmented frames of the audio and the $q$ acoustic featur...
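As a concrete illustration of the shapes involved, the following is a hypothetical Python sketch (not the patent's code) of the three aligned feature streams. The GloVe, FACET, and COVAREP extractors are stubbed with random data, the COVAREP dimension q=74 is an assumption (the original text is truncated before giving it), and word-level alignment is reduced to averaging each modality's frames within equal word spans, which is one common reading of the alignment step.

```python
import numpy as np

T_l, T_v, T_a = 20, 150, 300   # words, video frames, audio frames (illustrative)
d_text, p, q = 300, 46, 74     # GloVe dim, FACET dim; q = 74 is assumed

# Stand-ins for the real extractors (GloVe embeddings, FACET, COVAREP).
text  = np.random.randn(T_l, d_text)
video = np.random.randn(T_v, p)
audio = np.random.randn(T_a, q)

def align_to_words(frames: np.ndarray, n_words: int) -> np.ndarray:
    """Word-level alignment: average the frames falling in each word span."""
    spans = np.array_split(frames, n_words)           # equal spans, for brevity
    return np.stack([s.mean(axis=0) for s in spans])  # (n_words, feature_dim)

video_aligned = align_to_words(video, T_l)  # (20, 46)
audio_aligned = align_to_words(audio, T_l)  # (20, 74)
print(text.shape, video_aligned.shape, audio_aligned.shape)
```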



Abstract

The invention discloses a multi-modal emotion recognition method based on a fusion attention network. The method comprises: extracting high-dimensional features of three modalities (text, vision, and audio) and aligning and normalizing them at the word level; inputting them into bidirectional gated recurrent unit (GRU) networks for training; extracting the state information output by the bidirectional GRU networks in the three single-modality sub-networks to calculate the degree of correlation of the state information among the modalities; calculating the attention distribution of the modalities at each moment, which serves as the weight parameter of the state information at that moment; and taking the weighted average of the state information of the three modality sub-networks with the corresponding weight parameters to obtain a fusion feature vector as the input of a fully connected network. The text, vision, and audio to be recognized are input into the trained bidirectional GRU network of each modality to obtain the final emotion intensity output. The method resolves the problem of uniform modality weighting during multi-modal fusion and improves emotion recognition accuracy under multi-modal fusion.
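The abstract outlines the full pipeline; below is a minimal PyTorch sketch of an architecture matching that description (per-modality bidirectional GRUs, cross-modal attention weights, weighted fusion, fully connected output). All layer sizes, the dot-product form of the cross-modal correlation score, and the module names are assumptions for illustration, not the patent's exact formulation.

```python
import torch
import torch.nn as nn

class FusionAttentionNet(nn.Module):
    """Sketch: three BiGRU sub-networks + attention-weighted fusion + FC head."""

    def __init__(self, dims=(300, 46, 74), hidden=64):
        super().__init__()
        self.grus = nn.ModuleList(
            nn.GRU(d, hidden, batch_first=True, bidirectional=True) for d in dims
        )
        self.proj = nn.ModuleList(  # map each modality into a shared space
            nn.Linear(2 * hidden, 2 * hidden) for _ in dims
        )
        self.fc = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                nn.Linear(hidden, 1))  # emotion intensity

    def forward(self, text, visual, audio):
        # states[m]: (batch, T, 2*hidden) for each modality sub-network
        states = [p(g(x)[0]) for g, p, x in
                  zip(self.grus, self.proj, (text, visual, audio))]
        h = torch.stack(states, dim=2)  # (batch, T, 3, 2*hidden)

        # Correlation of each modality's state with the other modalities'
        # states at the same moment (dot product; an assumed scoring form).
        sim = torch.einsum('btmd,btnd->btmn', h, h)
        score = sim.sum(dim=-1) - sim.diagonal(dim1=2, dim2=3)  # exclude self
        alpha = torch.softmax(score, dim=-1)  # attention over modalities

        fused = (alpha.unsqueeze(-1) * h).sum(dim=2)    # weighted average
        return self.fc(fused.mean(dim=1)).squeeze(-1)   # pool over time

model = FusionAttentionNet()
out = model(torch.randn(2, 20, 300), torch.randn(2, 20, 46), torch.randn(2, 20, 74))
print(out.shape)  # torch.Size([2])
```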

Description

Technical Field

[0001] The present invention relates to the fields of text processing, audio processing, visual processing, feature extraction, deep learning, recurrent neural networks, and emotion recognition, and in particular to a multi-modal emotion recognition method.

Background Art

[0002] Emotion recognition is a research hotspot in the field of natural language processing. Its main challenge is performing continuous, real-time analysis of a speaker's emotion. Multi-modal emotion recognition research has made great progress on a variety of tasks and has become an emerging research field of artificial intelligence. Recognizing human emotions using information such as facial expressions, voice intonation, and body gestures is an interesting and challenging problem. In research on multi-modal emotion recognition involving video, text, vision, and audio are often used as the main multi-modal information. The purpo...


Application Information

IPC(8): G06F17/27; G06K9/62
CPC: G06F40/205; G06F18/24
Inventors: 宦若虹, 鲍晟霖, 葛罗棋, 谢超杰
Owner: ZHEJIANG UNIV OF TECH