
Multi-modal emotion recognition method based on time domain convolutional network

A convolutional-network-based emotion recognition technology, applied in the fields of deep learning, pattern recognition, and audio/video processing, which addresses the problems of limited information use and relatively low accuracy in decision-level fusion.

Active Publication Date: 2021-05-11
SOUTHEAST UNIV

AI Technical Summary

Problems solved by technology

Decision fusion makes a final judgment according to certain rules after each modal model has produced its own emotion recognition result. It offers high flexibility and strong real-time performance, but because each modality contributes only its final decision, it uses less information and achieves relatively low accuracy.
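
The trade-off can be made concrete with a small sketch. The following Python snippet is illustrative only: the class count of 6 matches the background section, but all values, variable names, and feature sizes are assumptions, not the patent's implementation. It contrasts decision fusion with the feature-level fusion route.

```python
import numpy as np

# Hypothetical per-modality softmax outputs over 6 emotion classes
# (Happy, Sad, Surprise, Angry, Fear, ...) -- all numbers are assumed.
p_audio = np.array([0.50, 0.10, 0.10, 0.15, 0.10, 0.05])
p_video = np.array([0.20, 0.05, 0.55, 0.10, 0.05, 0.05])

# Decision fusion: each modality decides first, then a simple rule
# (here, averaging class probabilities) merges the decisions.
# Flexible and fast, but the rule sees only 6 numbers per modality,
# so cross-modal detail is lost -- the "less information" problem.
p_fused = (p_audio + p_video) / 2
print("decision-fusion prediction:", int(np.argmax(p_fused)))

# Feature fusion instead concatenates intermediate features so a single
# joint classifier can exploit cross-modal correlations.
f_audio = np.random.rand(128)   # assumed audio feature vector
f_video = np.random.rand(128)   # assumed video feature vector
joint_feature = np.concatenate([f_audio, f_video])  # fed to one classifier
```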



Examples


Embodiment Construction

[0062] The present invention is further described below in conjunction with the accompanying drawings and specific embodiments. It should be understood that these examples are only intended to illustrate the present invention, not to limit its scope; after reading the present disclosure, modifications of various equivalent forms made by those skilled in the art all fall within the scope defined by the appended claims of the present application.

[0063] A multi-modal emotion recognition method based on a temporal convolutional network, as shown in Figure 1, includes:

[0064] (1) Obtain a number of audio and video samples containing emotional information, sample the video modality data in the samples at intervals, and perform face detection and key point positioning to obtain grayscale face image sequences (see the sketch after this step).

[0065] This step specifically includes:

[0066] (1-1) Sampling the video modality da...
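
As a rough illustration of step (1), the following Python sketch samples every few frames of a video and extracts grayscale face crops with OpenCV. The Haar cascade detector, the sampling interval, and the output size are stand-in assumptions; the patent's actual face detector and key point positioning method are not specified here.

```python
import cv2

def sample_gray_faces(video_path, interval=5, size=(48, 48)):
    """Sample every `interval`-th frame, detect the largest face, and
    return a grayscale face image sequence. The Haar cascade stands in
    for whatever detector/landmark model the patent actually uses."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(video_path)
    faces, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % interval == 0:  # interval sampling of the video modality
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            boxes = detector.detectMultiScale(gray, 1.1, 5)
            if len(boxes) > 0:
                # keep the largest detected face, cropped and resized
                x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])
                faces.append(cv2.resize(gray[y:y+h, x:x+w], size))
        idx += 1
    cap.release()
    return faces
```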



Abstract

The invention discloses a multi-modal emotion recognition method based on a time domain convolutional network. The method comprises the steps of: sampling the video modality data in an audio/video sample at intervals, performing face detection and key point positioning, and obtaining a grayscale face image sequence; applying a short-time Fourier transform to the audio and obtaining a Mel spectrogram through a Mel filter bank; passing the grayscale face image sequence and the Mel spectrogram through a face image convolutional network and a spectrogram image convolutional network respectively, and performing feature fusion; inputting the fused feature sequence into a time domain convolutional network to obtain high-level feature vectors; and passing the high-level feature vectors through a fully connected layer and Softmax regression to obtain the predicted probability of each emotion category. Cross-entropy loss is calculated between the predicted and actual probability distributions, and the whole network is trained through back propagation to obtain a trained neural network model. Emotion can be predicted from audio and video, the training time is short, and the recognition accuracy is high.
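
To make the pipeline in the abstract concrete, here is a minimal PyTorch sketch of the architecture: two image convolutional networks (one for face frames, one for Mel spectrogram slices), feature concatenation, a stack of dilated 1-D (temporal) convolutions, and a fully connected softmax head trained with cross-entropy. All layer sizes, kernel choices, and input shapes are assumptions for illustration, not the patent's actual configuration.

```python
import torch
import torch.nn as nn

class MultimodalTCN(nn.Module):
    """Hedged sketch of the abstract's pipeline: two image CNNs,
    feature concatenation, dilated temporal convolutions, then a
    fully connected head over 6 emotions. Sizes are assumptions."""
    def __init__(self, feat=128, classes=6):
        super().__init__()
        def cnn():  # 1-channel image -> `feat`-dim feature vector
            return nn.Sequential(
                nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat))
        self.face_cnn = cnn()       # grayscale face frames
        self.spec_cnn = cnn()       # Mel-spectrogram slices
        self.tcn = nn.Sequential(   # dilated 1-D (temporal) convolutions
            nn.Conv1d(2 * feat, feat, 3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv1d(feat, feat, 3, padding=2, dilation=2), nn.ReLU())
        self.head = nn.Linear(feat, classes)

    def forward(self, faces, specs):
        # faces, specs: (batch, time, 1, H, W), aligned per time step
        b, t = faces.shape[:2]
        f = self.face_cnn(faces.flatten(0, 1)).view(b, t, -1)
        s = self.spec_cnn(specs.flatten(0, 1)).view(b, t, -1)
        fused = torch.cat([f, s], dim=-1).transpose(1, 2)  # (b, 2*feat, t)
        h = self.tcn(fused).mean(dim=-1)   # temporal pooling -> (b, feat)
        return self.head(h)  # logits; CrossEntropyLoss applies softmax

model = MultimodalTCN()
logits = model(torch.randn(2, 8, 1, 48, 48), torch.randn(2, 8, 1, 64, 64))
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0, 3]))  # backprop target
```

In practice, the `specs` input could be produced by a short-time Fourier transform followed by a Mel filter bank, e.g. with `torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_fft=512, n_mels=64)` (parameter values again assumed, not taken from the patent).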

Description

Technical Field

[0001] The present invention relates to audio and video processing, pattern recognition, and deep learning technology, and in particular to a multi-modal emotion recognition method based on a time-domain convolutional network.

Background Technique

[0002] In 1997, Professor Picard first proposed the concept of "affective computing", which involves psychology, cognition, pattern recognition, speech signal processing, physiology, sociology, computer vision, and artificial intelligence. It uses facial expressions, voice, and other information to identify the emotional state shown by humans, so that machines can better understand human emotions and behaviors and thereby provide a smoother and more efficient interactive experience. Multi-modal emotion recognition aims to use expression and voice modality information to identify basic human emotions, which are generally divided into 6 categories: happy (Happy), sad (Sad), surprised (Surprise), angry (Angry), fear (Fear)...


Application Information

Patent Type & Authority: Application (China)
IPC (8): G06K9/00, G06K9/46, G06K9/62, G06N3/04, G06N3/08, G10L25/63
CPC: G06N3/08, G10L25/63, G06V40/168, G06V40/172, G06V10/44, G06N3/047, G06F18/2415, G06F18/241
Inventors: 李克 (Li Ke), 梁瑞宇 (Liang Ruiyu), 赵力 (Zhao Li), 郭如雪 (Guo Ruxue)
Owner: SOUTHEAST UNIV