
Special personnel emotion recognition method and system based on multi-modal data fusion

A technology for emotion recognition of special personnel, applied in the field of emotion recognition for special personnel. It addresses the absence of an intelligent emotion recognition system for special personnel, and achieves the effects of facilitating cross-layer feature transmission, reducing the parameter count, and mitigating the vanishing-gradient problem.

Pending Publication Date: 2022-01-04
SHANDONG UNIV

AI Technical Summary

Problems solved by technology

[0004] In special places, there is currently no intelligent emotion recognition system for special personnel, particularly in family-visit scenes and conversation-interrogation scenes.



Examples


Embodiment 1

[0071] An emotion recognition method for special personnel based on multi-modal data fusion. It uses deep learning to mine deep semantic features of the data, realizes cross-modal fusion, and outputs the emotional category of the special personnel as a probability distribution; a deep network is constructed to achieve hybrid fusion of the multi-modal data and accurate emotion recognition. As shown in Figure 1, the method includes the following steps:

[0072] (1) Preprocess the acquired physiological parameters, posture parameters, audio, and video of the special personnel, and extract the corresponding spatio-temporal feature vectors for each modality;

[0073] Physiological parameters and posture parameters of the special personnel are collected through wearable devices over a period of time (e.g., 3 seconds). Physiological parameters include heart rate, respiration rate, body temperature, galvanic skin response, ECG, and EEG; posture param...
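The three-step flow of Embodiment 1 can be sketched as a minimal pipeline. Everything here is illustrative: the function bodies are placeholders, the modality sizes are invented, and the softmax head simply stands in for "output the emotional category as a probability distribution" — it is not the patent's actual network.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def preprocess(raw):
    # placeholder for cleaning, filtering, normalization, alignment
    return {k: np.asarray(v, dtype=float) for k, v in raw.items()}

def extract_features(data):
    # placeholder for per-modality spatio-temporal feature extraction
    return [v.ravel() for v in data.values()]

def fuse(features):
    # feature-level fusion: concatenate into one global vector
    return np.concatenate(features)

def recognize(joint, weight, bias):
    # emotion category emitted as a probability distribution
    return softmax(weight @ joint + bias)

# hypothetical per-modality inputs (sizes are illustrative)
raw = {"physio": np.ones(6), "posture": np.ones(4),
       "audio": np.ones(8), "video": np.ones(8)}
joint = fuse(extract_features(preprocess(raw)))  # 6+4+8+8 = 26 features
rng = np.random.default_rng(0)
probs = recognize(joint, rng.normal(size=(5, joint.size)), np.zeros(5))
print(probs.sum())  # sums to 1 (up to floating point)
```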

Embodiment 2

[0077] A special-personnel emotion recognition method based on multi-modal data fusion as described in Embodiment 1, differing in that:

[0078] In step (1), preprocessing the acquired physiological parameters, posture parameters, audio, and video of the special personnel means: performing data cleaning on the acquired physiological and posture parameters, filtering the audio, and decoding the video; the physiological parameters, posture parameters, audio, and video acquired over the same time period are then subjected to data normalization and data alignment.
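The normalization and alignment operations above can be sketched as follows. This is a minimal illustration assuming min-max normalization and linear interpolation onto a common time base; the patent text does not specify which scheme is used.

```python
import numpy as np

def minmax(x):
    """Scale a signal into [0, 1]; a constant signal maps to all zeros."""
    x = np.asarray(x, dtype=float)
    span = x.max() - x.min()
    return (x - x.min()) / span if span else np.zeros_like(x)

def align(t_src, x_src, t_ref):
    """Align a signal to a reference time base by linear interpolation."""
    return np.interp(t_ref, t_src, x_src)

t_ref = np.linspace(0.0, 3.0, 7)    # common 3-second time base
t_hr = np.linspace(0.0, 3.0, 4)     # a slower sensor's timestamps
hr = align(t_hr, [60.0, 62.0, 61.0, 63.0], t_ref)
hr_norm = minmax(hr)
print(len(hr), hr_norm.min(), hr_norm.max())  # 7 samples, scaled to [0, 1]
```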

[0079] In step (1), the spatio-temporal feature vector of the physiological parameters is obtained as follows: the physiological parameters collected at each sampling time are spliced into a vector; if the sampling frequencies are inconsistent, the highest sampling frequency is ...
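A sketch of the splicing step, under the assumption (suggested by the truncated text) that slower channels are upsampled to the highest sampling frequency before per-timestep splicing. The interpolation choice and channel values are illustrative.

```python
import numpy as np

def splice_channels(channels, duration):
    """Upsample every channel to the highest sampling rate, then stack so
    that row t is the spliced vector of all parameters at sampling time t."""
    n_max = max(len(c) for c in channels)
    t_ref = np.linspace(0.0, duration, n_max)
    cols = []
    for c in channels:
        t_src = np.linspace(0.0, duration, len(c))
        cols.append(np.interp(t_ref, t_src, c))   # assumed: linear upsampling
    return np.stack(cols, axis=1)                 # shape (n_max, n_channels)

# e.g. heart rate sampled 3 times, body temperature 6 times over 3 s
mat = splice_channels([[60, 62, 61],
                       [36.5, 36.6, 36.5, 36.7, 36.6, 36.8]], 3.0)
print(mat.shape)  # (6, 2): 6 timesteps, 2 spliced parameters each
```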

Embodiment 3

[0093] A special-personnel emotion recognition method based on multi-modal data fusion as described in Embodiment 1 or 2, differing in that:

[0094] The specific implementation process of step (2) is as follows:

[0095] Flatten the feature maps of the physiological parameters, posture parameters, audio, and video into feature vectors, then concatenate all of the feature vectors into a single global feature vector, realizing feature-level fusion;
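The flatten-and-concatenate fusion is straightforward to express with NumPy; the feature-map shapes below are invented for illustration only.

```python
import numpy as np

# Hypothetical per-modality feature maps (shapes are illustrative)
physio = np.zeros((4, 8))      # 32 values
posture = np.zeros((4, 4))     # 16 values
audio = np.zeros((2, 16))      # 32 values
video = np.zeros((3, 3, 8))    # 72 values

# Flatten each map to a vector, then concatenate into the global vector
global_vec = np.concatenate([m.ravel()
                             for m in (physio, posture, audio, video)])
print(global_vec.shape)  # (152,)
```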

[0096] A multi-layer fully connected network is used to extract the joint feature vector, and the feature vectors for this time period are temporarily stored. As shown in Figure 5, this means: the global feature vector is input into the multi-layer fully connected network; each neuron in a layer is connected to all neurons in the next layer with a certain weight, and each layer's neuron values are th...
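A minimal forward pass through such a fully connected network, assuming ReLU activations and random initial weights — the patent text is truncated before naming the activation, so both are assumptions, as are the layer sizes.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class MLP:
    """Each layer's neurons connect to all neurons of the next layer
    through a weight matrix; activations are assumed to be ReLU."""
    def __init__(self, sizes, seed=0):
        rng = np.random.default_rng(seed)
        self.weights = [rng.normal(scale=0.1, size=(m, n))
                        for n, m in zip(sizes[:-1], sizes[1:])]
        self.biases = [np.zeros(m) for m in sizes[1:]]

    def forward(self, x):
        for w, b in zip(self.weights, self.biases):
            x = relu(w @ x + b)
        return x

net = MLP([152, 64, 32])            # layer sizes are illustrative
joint = net.forward(np.ones(152))   # the joint feature vector to store
print(joint.shape)  # (32,)
```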



Abstract

The invention relates to a special-personnel emotion recognition method and system based on multi-modal data fusion. The method comprises the following steps: (1) preprocessing the obtained physiological parameters, posture parameters, audio, and video of special personnel, and extracting the corresponding spatio-temporal feature vectors; (2) fusing the spatio-temporal feature vectors of the physiological parameters, posture parameters, audio, and video, and extracting a joint feature vector; and (3) inputting the joint feature vector into a trained emotion recognition model for emotion recognition. The invention realizes the collection, processing, and fusion of multi-modal data and improves the accuracy of emotion recognition for special personnel. It enables a supervisor to grasp the emotional condition of special personnel in a timely manner and adopt a targeted supervision and correction strategy, reducing the occurrence of extreme events and maintaining the safety and stability of a special place.

Description

technical field

[0001] The invention relates to a special-personnel emotion recognition method and system based on multi-modal data fusion, and belongs to the technical fields of artificial intelligence and signal processing.

Background technique

[0002] Deep learning can fully mine the deep spatio-temporal characteristics contained in data. By constructing a deep neural network and training it in a supervised manner with backpropagation-based stochastic gradient descent, objects of interest can be intelligently identified and classified. Data fusion technology exploits the complementarity and redundancy between different modalities to compensate for the low quality of single-modality data and the weak separability of different object categories; fusion analysis of multi-modal data can effectively improve the accuracy of object recognition and classification.

[0003] In special ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00; G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/08; G06N3/044; G06N3/045; G06F18/241; G06F18/251; G06F18/253
Inventor: 翟超, 倪志祥, 李玉军, 杨阳
Owner SHANDONG UNIV