
An emotion recognition method for prisoners based on multimodal feature fusion with a self-weight differential encoder

A technology of differential encoding and feature fusion, applied in the field of affective computing, which addresses problems such as the difficulty of accurately judging prisoners' true emotions, low recognition rates, and poor robustness, and achieves the effects of eliminating degradation problems, improving expressive ability, and improving accuracy.

Active Publication Date: 2020-06-30
SHANDONG UNIV

AI Technical Summary

Problems solved by technology

[0004] Because inmates strongly conceal their behavioral cues and are highly psychologically guarded, relying on single-modal data for emotion recognition can introduce considerable noise, making it difficult to accurately judge inmates' true emotions; in addition, single-modal emotion recognition suffers from a low recognition rate and poor robustness.


Examples


Embodiment 1

[0089] A method for emotion recognition of inmates based on a self-weight differential encoder for multi-modal feature fusion, as shown in Figure 2, comprising the following steps:

[0090] (1) Data preprocessing: preprocess the text, voice, and micro-expression data so that each modality meets the input requirements of its corresponding model;

[0091] Text data refers to the text of inmates' conversations with family members, relatives, and friends during remote video meetings; voice data refers to the audio data of those conversations; micro-expression data refers to the facial micro-expression data of inmates recorded during the same remote video meetings.

[0092] (2) Feature extraction: extract the emotional information contained in the data of the three ...

Embodiment 2

[0109] A method for emotion recognition of inmates based on a self-weight differential encoder for multi-modal feature fusion according to Embodiment 1, with the difference that, in step (1):

[0110] For text data, the preprocessing process includes: segmenting the text data into words and, according to the segmentation results and the word vectors corresponding to the words, converting the text data into a data structure that the TextCNN model can accept and compute on.

[0111] During data conversion, every word appearing in the text data is numbered and a dictionary is generated in which each word corresponds to a serial number. Each text is then segmented and, according to the serial numbers of its words in the dictionary, converted into a mathematical sequence of serial numbers. Each serial number is then looked up in the initialized word-vector list, so that the sequence is converted into mat...
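A minimal sketch of this conversion, assuming the texts are already segmented into word lists and the word vectors are randomly initialized (the segmentation tool, vocabulary size, and embedding dimension are not specified in this excerpt):

```python
import numpy as np

def build_vocab(segmented_texts):
    """Assign a serial number to every word; 0 is reserved for padding."""
    vocab = {"<pad>": 0}
    for words in segmented_texts:
        for w in words:
            vocab.setdefault(w, len(vocab))
    return vocab

def texts_to_matrices(segmented_texts, vocab, embed_dim=300, max_len=50):
    """Convert each segmented text into a sequence of serial numbers, then
    map every serial number to a row of an initialized word-vector table,
    yielding one (max_len, embed_dim) matrix per text."""
    rng = np.random.default_rng(0)
    # Initialized word-vector list; pre-trained vectors could be loaded instead.
    embeddings = rng.normal(scale=0.1, size=(len(vocab), embed_dim))
    embeddings[0] = 0.0                          # padding vector
    matrices = []
    for words in segmented_texts:
        ids = [vocab.get(w, 0) for w in words][:max_len]
        ids += [0] * (max_len - len(ids))        # pad to a fixed length
        matrices.append(embeddings[ids])
    return np.stack(matrices)

# Hypothetical usage with already-segmented conversation texts:
# X = texts_to_matrices(segmented_texts, build_vocab(segmented_texts))
```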

Embodiment 3

[0126] A method for emotion recognition of inmates based on a self-weight differential encoder for multi-modal feature fusion according to Embodiment 1, with the difference that, in step (2):

[0127] For text data, the feature extraction process includes: extracting the features of the text data through the TextCNN model;

[0128] The TextCNN model uses multiple convolution kernels of different sizes to extract key information from sentences, which allows it to better capture local correlations. TextCNN's biggest advantage is its simple network structure; combined with pre-trained word vectors, this simple structure accelerates convergence while still achieving good results.
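The excerpt does not give the TextCNN hyperparameters; the following PyTorch sketch assumes kernel sizes of 3, 4, and 5 with 100 filters each and max-over-time pooling, a common TextCNN configuration rather than the one claimed in the patent:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    """TextCNN feature extractor: parallel 1-D convolutions with
    different kernel sizes, followed by max-over-time pooling."""

    def __init__(self, vocab_size, embed_dim=300,
                 kernel_sizes=(3, 4, 5), num_filters=100):
        super().__init__()
        # Embedding layer; could be initialized from pre-trained word vectors.
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, num_filters, kernel_size=k)
             for k in kernel_sizes]
        )

    def forward(self, token_ids):                 # (batch, seq_len)
        x = self.embedding(token_ids)             # (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)                     # (batch, embed_dim, seq_len)
        # One feature map per kernel size, max-pooled over time.
        pooled = [F.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return torch.cat(pooled, dim=1)           # (batch, num_filters * len(kernel_sizes))
```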

[0129] For speech data, the feature extraction process includes:

[0130] c. Run OpenSMILE on a Linux platform, take the voice file in WAV format as input, and select emobase2010.conf as the standard feature data ...
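A minimal sketch of invoking openSMILE from Python, assuming the SMILExtract command-line tool is installed and the emobase2010 configuration file lives at the path shown (the path and file names here are placeholders):

```python
import subprocess

def extract_opensmile_features(wav_path, csv_path,
                               config="/opt/opensmile/config/emobase2010.conf"):
    """Call the openSMILE SMILExtract binary to turn a WAV file into a
    CSV line of acoustic features (the emobase2010 set yields 1582 features)."""
    subprocess.run(
        ["SMILExtract",
         "-C", config,      # feature configuration file
         "-I", wav_path,    # input WAV file
         "-O", csv_path],   # output file holding the feature vector
        check=True,
    )

# Hypothetical usage on one conversation recording:
# extract_opensmile_features("inmate_call_001.wav", "inmate_call_001.csv")
```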



Abstract

The present invention relates to an emotion recognition method for inmates based on a self-weight differential encoder for multimodal feature fusion, comprising the following steps: (1) data preprocessing: preprocess the text, voice, and micro-expression data respectively so that each meets the input requirements of its corresponding model; (2) feature extraction: extract the emotional information contained in the preprocessed text, voice, and micro-expression data respectively, and obtain the corresponding feature vectors; (3) feature fusion: use the self-weight differential encoder to fuse the feature vectors; (4) train the model to obtain the optimal emotion recognition model. The present invention uses a self-weight differential encoder for multi-modal feature fusion; through the cross-complementation of features from multiple modalities, it effectively reduces the limitations of single-modal data and the negative impact of erroneous information, making the extracted emotional features richer, more effective, and more accurate, and improving the emotion recognition effect for prisoners.
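The excerpt does not disclose the internal structure of the self-weight differential encoder. The sketch below is only one hypothetical reading, in which each modality receives a learned weight ("self-weight") and the weighted concatenation passes through an encoder with a residual connection to counter degradation; it is illustrative, not the patented design:

```python
import torch
import torch.nn as nn

class SelfWeightDifferentialFusion(nn.Module):
    """Hypothetical fusion block: learns one scalar weight per modality,
    then encodes the weighted concatenation with a residual (skip)
    connection to ease degradation. An illustrative guess, not the
    structure claimed in the patent."""

    def __init__(self, text_dim, audio_dim, face_dim, hidden_dim=256):
        super().__init__()
        fused_dim = text_dim + audio_dim + face_dim
        # One learnable weight per modality ("self-weight").
        self.modality_weights = nn.Parameter(torch.ones(3))
        self.encoder = nn.Sequential(
            nn.Linear(fused_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, fused_dim),
        )

    def forward(self, text_feat, audio_feat, face_feat):
        w = torch.softmax(self.modality_weights, dim=0)
        fused = torch.cat([w[0] * text_feat,
                           w[1] * audio_feat,
                           w[2] * face_feat], dim=1)
        # Residual path around the encoder ("differential" encoding).
        return fused + self.encoder(fused)
```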

Description

Technical Field

[0001] The invention relates to an emotion recognition method for inmates based on a self-weight differential encoder for multimodal feature fusion, and belongs to the technical field of affective computing.

Background Technique

[0002] Since the end of the 20th century, emotion has played an increasingly important role in the study of cognition. Contemporary cognitive scientists place emotion alongside classic cognitive processes such as perception, learning, memory, and speech, and research on emotion itself and on its interaction with other cognitive processes has become a hotspot in contemporary cognitive science. Emotion recognition has likewise become an emerging field of research.

[0003] In daily life, emotion recognition means having a computer compute the target person's emotion while that emotion is naturally expressed. It plays an irreplaceable role in many fields, for example in information appliances and sm...


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G06K9/62; G06K9/00; G06N3/04
CPC: G06V20/41; G06V20/46; G06N3/045; G06F18/241
Inventors: 李玉军, 张文真, 贲晛烨, 刘治, 朱孔凡, 胡伟凤
Owner: SHANDONG UNIV