
Emotion recognition method, system and device based on multi-modal feature fusion and medium

A feature-fusion and emotion-recognition technology, applied in the field of emotion recognition, addressing problems such as unconsidered feature correlations, low recognition efficiency, and high model complexity, to achieve the effects of improved efficiency, improved accuracy, and reduced model complexity.

Pending Publication Date: 2021-12-17
GUANGZHOU UNIVERSITY
Cites: 0 | Cited by: 5
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

However, existing technologies often use multiple neural network models to perform emotion recognition separately on speech, facial expression, and other features, and then make a comprehensive judgment based on the individual recognition results. On the one hand, this approach requires training a separate recognition model for each type of feature, so model complexity is high and recognition efficiency is low. On the other hand, because each feature is recognized in isolation, the influence of the correlations between features on the emotion recognition result is not considered, so the accuracy of emotion recognition is relatively low.

Method used



Examples


Embodiment Construction

[0048] Embodiments of the present invention are described in detail below, with examples shown in the drawings, wherein the same or similar reference numerals designate identical or similar elements, or elements having the same or similar functions, throughout. The embodiments described below with reference to the figures are exemplary, intended only to explain the present invention, and should not be construed as limiting it. The step numbers in the following embodiments are set only for convenience of illustration and description and do not limit the order of the steps in any way; the execution order of the steps in the embodiments may be adaptively adjusted according to the understanding of those skilled in the art.

[0049] In the description of the present invention, "multiple" means two or more. Terms such as "first" and "second" are used only to distinguish technical features and cannot be understood as indicating or...


PUM

No PUM data available.

Abstract

The invention discloses an emotion recognition method, system, device, and medium based on multi-modal feature fusion. The method comprises the steps of: obtaining preset first voice information and corresponding first visual information, and performing feature extraction on both to obtain a voice feature image and an expression feature image; performing feature fusion on the voice feature image and the expression feature image to obtain a first multi-modal feature, and constructing a training data set from the first multi-modal feature; inputting the training data set into a pre-constructed convolutional neural network for training to obtain a trained multi-modal feature recognition model; and recognizing the emotion of the person to be tested using the multi-modal feature recognition model. On one hand, this reduces model complexity and improves the efficiency of model training and emotion recognition; on the other hand, it takes into account the joint influence of voice features and expression features on the recognition result, improving emotion recognition accuracy. The method can be widely applied in the technical field of emotion recognition.
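The fusion step described in the abstract can be illustrated with a minimal sketch. The patent does not specify the exact fusion mechanism or feature-image dimensions, so the following assumes channel-wise stacking of two 64×64 feature images (a hypothetical choice); the random arrays stand in for a real voice feature image (e.g., a log-mel spectrogram) and an expression feature image:

```python
import numpy as np

# Hypothetical common grid size for both feature "images" (not specified
# in the patent; assumed here for illustration).
H, W = 64, 64

# Stand-ins for extracted features: in practice these would come from
# speech feature extraction (e.g., a spectrogram) and facial-expression
# feature extraction, respectively.
rng = np.random.default_rng(0)
speech_feature = rng.random((H, W)).astype(np.float32)      # voice feature image
expression_feature = rng.random((H, W)).astype(np.float32)  # expression feature image

def fuse_features(speech, expression):
    """Channel-wise fusion: stack the two feature images into a single
    multi-modal tensor that one CNN can consume, instead of feeding each
    modality to a separate model."""
    assert speech.shape == expression.shape, "feature images must share a grid"
    return np.stack([speech, expression], axis=0)  # shape (2, H, W)

multimodal = fuse_features(speech_feature, expression_feature)
print(multimodal.shape)  # (2, 64, 64)
```

The resulting `(channels, height, width)` tensor is the kind of input a single convolutional network can train on, which is the abstract's stated route to lower model complexity than per-modality models.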

Description

technical field

[0001] The present invention relates to the technical field of emotion recognition, and in particular to an emotion recognition method, system, device, and medium based on multi-modal feature fusion.

background technique

[0002] Emotion recognition is an important part of realizing full human-computer interaction and can be applied in many different fields; for example, it can be used to monitor and predict fatigue status. The task of emotion recognition is challenging because human emotions lack temporal boundaries and different people express emotions in different ways. Despite rich existing experience in inferring a subject's emotion from speech or from other modalities such as visual information (facial gestures), the accuracy of single-modal emotion recognition is not high and its generalization ability is poor.

[0003] With the advent of deep neural networks in the past decade, there have been many b...

Claims


Application Information

IPC(8): G06K9/00; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/08; G06N3/045; G06F18/241; G06F18/253; G06F18/214; Y02D10/00
Inventors: 陈首彦, 刘冬梅, 孙欣琪, 张健, 杨晓芬, 赵志甲, 朱大昌
Owner GUANGZHOU UNIVERSITY