
Feature-level fusion method for multi-modal emotion detection

A feature-level fusion, multi-modal technology applied in character and pattern recognition, special data-processing applications, and biological neural network models, achieving simple operation, stable results, and fast execution speed.

Status: Inactive · Publication Date: 2019-12-13
ZHEJIANG UNIV OF TECH

AI Technical Summary

Problems solved by technology

[0003] Traditional emotion detection methods use only a single visual or audio signal, which has certain limitations.




Embodiment Construction

[0039] The present invention will be described in detail below in conjunction with the accompanying drawings, so that its advantages and features can be more easily understood by those skilled in the art and the protection scope of the present invention can be defined more clearly.

[0040] Referring to Figure 1 and Figure 2, a feature-level fusion method for multimodal emotion detection comprises the following steps:

[0041] Step 1: Obtain the text-form transcript from the public IEMOCAP multimodal dataset. The transcript S is a sentence composed of n words, that is, S = [w_1, w_2, ..., w_n];
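As a concrete illustration of Step 1, here is a minimal sketch of turning one transcript line into the word sequence S. The utterance line format shown is an assumption for illustration; the actual IEMOCAP transcript layout may differ in detail.

```python
# Hypothetical Step-1 sketch: one IEMOCAP-style transcript line -> S = [w_1, ..., w_n].
# The utterance id and timestamp layout below are illustrative assumptions.
line = "Ses01F_impro01_F000 [006.2-008.2]: Excuse me, do you have a minute?"
text = line.split(":", 1)[1].strip()            # drop utterance id / timestamps
S = [w.strip(",.?!").lower() for w in text.split()]
print(S)   # ['excuse', 'me', 'do', 'you', 'have', 'a', 'minute']
```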

[0042] Step 2: Using an existing fastText embedding dictionary, embed each initial one-hot word vector w_i of dimension V into a low-dimensional real-valued vector to obtain a vector sequence X;

[0043] Each word is embedded by the embedding formula, transforming the sentence S into the vector sequence X = [x_1, x_2, ..., x_n];
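A minimal sketch of Step 2's embedding lookup, assuming a fastText-style word-to-vector dictionary and an illustrative embedding dimension d = 300 (neither value is fixed by the text above). Out-of-vocabulary words get a random vector here purely to keep the sketch self-contained.

```python
# Hedged sketch of Step 2: map each word w_i of S to a low-dimensional
# real-valued vector x_i via a pretrained (fastText-style) lookup table.
# `embed_dict` and d = 300 are illustrative assumptions, not patent values.
import numpy as np

d = 300                                   # assumed embedding dimension
rng = np.random.default_rng(0)
embed_dict = {}                           # word -> R^d lookup table

def embed(word: str) -> np.ndarray:
    """Return the embedding x_i for word w_i; unseen words get a random vector."""
    if word not in embed_dict:
        embed_dict[word] = rng.normal(size=d).astype(np.float32)
    return embed_dict[word]

S = ["i", "feel", "really", "happy", "today"]      # example transcript
X = np.stack([embed(w) for w in S])                # X = [x_1, ..., x_n], shape (n, d)
print(X.shape)                                     # (5, 300)
```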

[0044] Step 3: Apply a single-layer CNN to the vector sequence X...
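Step 3 is truncated in the source, so the following is a hedged sketch of what "a single-layer CNN on the vector sequence X" typically looks like: one valid 1-D convolution over time with ReLU, followed by max-over-time pooling. The kernel width k = 3, the 64 filters, and the pooling choice are assumptions, not details given by the patent.

```python
# A minimal single-layer 1-D convolution over the vector sequence X.
# Hyper-parameters (k = 3 kernel width, 64 filters) are illustrative.
import numpy as np

def conv1d_relu(X: np.ndarray, W: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Valid 1-D convolution over time followed by ReLU.
    X: (n, d) word vectors; W: (k, d, f) filters; b: (f,) bias."""
    n, d = X.shape
    k, _, f = W.shape
    out = np.empty((n - k + 1, f), dtype=X.dtype)
    for t in range(n - k + 1):
        window = X[t:t + k]                      # (k, d) local n-gram window
        out[t] = np.tensordot(window, W, axes=([0, 1], [0, 1])) + b
    return np.maximum(out, 0.0)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 300)).astype(np.float32)   # stands in for the embedded sentence
W = rng.normal(scale=0.1, size=(3, 300, 64)).astype(np.float32)
b = np.zeros(64, dtype=np.float32)
H = conv1d_relu(X, W, b)                           # (3, 64) convolved features
u_text = H.max(axis=0)                             # max-over-time pooling -> text feature
print(u_text.shape)                                # (64,)
```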



Abstract

The invention discloses a feature-level fusion method for multi-modal emotion detection. The method comprises the following steps: obtaining a transcript in text form from a public data set; applying a single-layer CNN to the vector sequence X; extracting audio information from an audio file in the data set with the open-source tool openSMILE and mapping the high-dimensional vector through a dense neural layer to obtain an audio feature vector; using multi-dimensional self-attention as the feature-fusion method to compute attention score probabilities for the unimodal features; performing a weighted addition with these attention score probabilities to create a fusion vector; mapping the resulting fusion vector s_u through another dense neural layer; computing the classification probabilities of the fusion vector with a softmax function; and computing the batch training loss with categorical cross-entropy and back-propagation to obtain the optimal emotion prediction. By introducing a self-attention mechanism, the method assigns appropriate weights to the two modal features and obtains fused features, thereby improving emotion-recognition accuracy.
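The fusion and classification stages of the pipeline in the Abstract can be sketched end to end. The sketch below assumes 64-dimensional unimodal features and 4 emotion classes, and treats "multi-dimensional self-attention" as a per-dimension score for each modality, normalized across the two modalities; this is one plausible reading of the Abstract, not the patent's verbatim equations.

```python
# Hedged sketch: fuse a text feature u_text (e.g. from the CNN) with an
# audio feature u_audio (e.g. an openSMILE functional vector projected
# through a dense layer; one common extraction command is
# `SMILExtract -C <config> -I utt.wav -O feats.csv`, not run here),
# then classify the fusion vector s_u with softmax + cross-entropy.
# All shapes, parameter matrices, and the 4 classes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, n_classes = 64, 4

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

u_text  = rng.normal(size=d).astype(np.float32)    # text feature (CNN output)
u_audio = rng.normal(size=d).astype(np.float32)    # audio feature (openSMILE -> dense)
U = np.stack([u_text, u_audio])                    # (2, d) unimodal features

# Multi-dimensional self-attention (one plausible parametrization): a
# per-dimension score per modality, normalized across modalities so each
# feature dimension is a convex combination of the two modalities.
W_att = rng.normal(scale=0.1, size=(d, d)).astype(np.float32)
b_att = np.zeros(d, dtype=np.float32)
scores = U @ W_att + b_att                         # (2, d) unnormalized scores
alpha = softmax(scores, axis=0)                    # attention score probabilities

s_u = (alpha * U).sum(axis=0)                      # weighted addition -> fusion vector s_u

# Dense layer + softmax classifier over the fusion vector.
W_out = rng.normal(scale=0.1, size=(d, n_classes)).astype(np.float32)
b_out = np.zeros(n_classes, dtype=np.float32)
p = softmax(s_u @ W_out + b_out)                   # class probabilities

# Categorical cross-entropy for one labelled example (label 2 is made up).
y = 2
loss = -np.log(p[y] + 1e-9)
print(p, loss)
```

In training, this loss would be averaged over a batch and minimized by back-propagation, as the Abstract describes.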

Description

Technical field

[0001] The present invention proposes a new feature-level fusion method that differs from traditional fusion methods. The method extracts the features of the text and audio modalities separately, introduces a self-attention mechanism, assigns appropriate weights to the two modal features, and obtains fused features, thereby improving the accuracy of emotion recognition. The specific method involved is a feature-level fusion method based on a self-attention mechanism.

Background technique

[0002] Emotion detection is a hot research field with broad application prospects. Machines can enhance human-computer interaction by accurately recognizing human emotions and responding to them. Emotion recognition also has important applications in medicine, education, marketing, security, and surveillance.

[0003] Traditional emotion detection methods use only a single visual or audio signal, which has certain limitations. Compared with si...


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/62; G06F17/27; G10L17/26; G06N3/04
CPC: G10L17/26; G06N3/04; G06F18/2415; G06F18/253
Inventor: 吴哲夫; 陈智伟
Owner: ZHEJIANG UNIV OF TECH