Emotion recognition method for enhancing coupling hidden markov model (HMM) voice-vision fusion

An emotion recognition technology, applied in character and pattern recognition, instruments, computing, etc.; it addresses the problem of a low recognition rate in the prior art.

Inactive Publication Date: 2013-02-13
BEIJING INSTITUTE OF TECHNOLOGY

AI Technical Summary

Problems solved by technology

[0014] The purpose of the present invention is to solve the problem of the low recognition rate in the prior art by proposing an emotion recognition method based on enhanced coupled-HMM speech-visual fusion.



Examples


Embodiment Construction

[0143] The implementation of the method of the present invention will be described in detail below in conjunction with the accompanying drawings and specific examples.

[0144] In this example, five experimenters (two male, three female) read sentences expressing seven basic emotions (happy, sad, angry, disgusted, fearful, surprised, and neutral) in a guided Wizard-of-Oz scenario, while a camera simultaneously records frontal facial-expression images and sound data. The scene script contains three different sentences for each emotion, and each person repeats each sentence five times. The emotional video data of four subjects are randomly selected as training data, and the video data of the remaining subject serve as the test set, so the whole recognition process is experimenter-independent. The experimental data were then re-labeled using a coarse activation-evaluation space classification; that is, samples were divided into positive and negative categories along the ...
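The subject-independent split described above can be sketched as a leave-one-subject-out partition. The code below is illustrative only (the corpus layout of 5 subjects x 7 emotions x 3 sentences x 5 repetitions follows the text; the function and field names are assumptions, not from the patent):

```python
# Hedged sketch of the leave-one-subject-out (experimenter-independent)
# split used in the embodiment: train on 4 subjects, test on the held-out one.

def leave_one_subject_out(samples, test_subject):
    """samples: list of (subject_id, features, emotion_label) tuples."""
    train = [s for s in samples if s[0] != test_subject]
    test = [s for s in samples if s[0] == test_subject]
    return train, test

# Corpus layout from the text: 5 subjects x 7 emotions x (3 sentences x 5
# repetitions) = 525 recordings; features are placeholders here.
samples = [(subj, None, emo)
           for subj in range(5)
           for emo in range(7)
           for _ in range(3 * 5)]

train, test = leave_one_subject_out(samples, test_subject=4)
```

Because the held-out subject never appears in training, any reported accuracy reflects generalization across speakers rather than speaker-specific fitting.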



Abstract

The invention provides an emotion recognition method based on enhanced coupled hidden Markov model (HMM) voice-vision fusion, and belongs to the field of automatic emotion recognition. The method fuses two characteristic modalities, facial expression and voice: an improved expectation-maximization (EM) algorithm trains a continuous two-component coupled HMM, and the weight of each sample is maintained and continuously updated during training so that the training process emphasizes samples that are difficult to classify. Compared with known recognition methods, this method markedly improves classification accuracy.
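The "enhanced" training described in the abstract, maintaining per-sample weights that are increased for hard samples, follows the general boosting pattern. The sketch below illustrates that reweighting loop only; a weighted diagonal-Gaussian likelihood model stands in for the coupled HMM, and the weight-update rule is standard AdaBoost, so everything here is an assumed simplification rather than the patent's exact algorithm:

```python
import numpy as np

def train_weighted_gaussian(X, w):
    """Weighted ML estimate of a diagonal Gaussian (stand-in for weighted EM)."""
    w = w / w.sum()
    mu = (w[:, None] * X).sum(axis=0)
    var = (w[:, None] * (X - mu) ** 2).sum(axis=0) + 1e-6
    return mu, var

def log_likelihood(X, mu, var):
    return -0.5 * (np.log(2 * np.pi * var) + (X - mu) ** 2 / var).sum(axis=1)

def boosted_training(X, y, n_rounds=5):
    """Boosting loop: each round trains class models on the current sample
    weights, then increases the weights of misclassified samples."""
    n = len(X)
    w = np.ones(n) / n                          # uniform initial weights
    models = []
    for _ in range(n_rounds):
        # one weighted model per class (binary labels 0/1 assumed here)
        params = {c: train_weighted_gaussian(X[y == c], w[y == c])
                  for c in (0, 1)}
        pred = (log_likelihood(X, *params[1]) >
                log_likelihood(X, *params[0])).astype(int)
        err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # model confidence
        # emphasize hard (misclassified) samples in the next round
        w *= np.exp(alpha * np.where(pred == y, -1.0, 1.0))
        w /= w.sum()
        models.append((alpha, params))
    return models

def predict(models, X):
    """Weighted vote of all boosting rounds."""
    score = np.zeros(len(X))
    for alpha, params in models:
        vote = np.where(log_likelihood(X, *params[1]) >
                        log_likelihood(X, *params[0]), 1.0, -1.0)
        score += alpha * vote
    return (score > 0).astype(int)
```

In the patent's setting, the per-class model would be the continuous two-component coupled HMM trained with the improved EM algorithm, with the same weight bookkeeping steering EM toward hard samples.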

Description

Technical field

[0001] The invention relates to an emotion recognition method based on multi-channel information fusion, and in particular to a speech-visual fusion emotion recognition method using an enhanced coupled HMM (Hidden Markov Model). It belongs to the field of automatic emotion recognition.

Background technique

[0002] Researchers in many disciplines have done a great deal of work in the field of automatic emotion recognition. Emotion can be represented using discrete categories (such as the six basic emotion categories proposed by Ekman), continuous dimensions (such as the activation-evaluation space), or appraisal-based methods. A variety of features, such as facial expressions, speech, body posture, and context, can be used to identify a person's emotional state. Much of this work addresses unimodal emotion recognition and analysis.

[0003] Fusing information from both speech and visual channels...
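A coupled HMM, as named above, ties the audio and video chains at the state level: each chain's next state depends on the previous states of both chains. The following is a minimal log-domain forward pass over that joint state space; all shapes, names, and parameters are illustrative assumptions, not values from the patent:

```python
import numpy as np

def coupled_forward(logB_a, logB_v, logA_a, logA_v, log_pi_a, log_pi_v):
    """Log-likelihood of an audio-video pair under a 2-chain coupled HMM.

    logB_a[t, i]   : log P(audio obs at t | audio state i)
    logB_v[t, j]   : log P(video obs at t | video state j)
    logA_a[i, j, k]: log P(audio state k at t | audio i, video j at t-1)
    logA_v[i, j, l]: log P(video state l at t | audio i, video j at t-1)
    """
    T, Na = logB_a.shape
    Nv = logB_v.shape[1]
    # alpha[i, j] = log P(obs_1..t, audio state i, video state j at t)
    alpha = (log_pi_a[:, None] + log_pi_v[None, :]
             + logB_a[0, :, None] + logB_v[0, None, :])
    for t in range(1, T):
        # sum over the previous joint state (i, j) in the log domain
        trans = (alpha[:, :, None, None]
                 + logA_a[:, :, :, None]
                 + logA_v[:, :, None, :]).reshape(Na * Nv, Na, Nv)
        m = trans.max(axis=0)
        alpha = (m + np.log(np.exp(trans - m).sum(axis=0))
                 + logB_a[t, :, None] + logB_v[t, None, :])
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())
```

The cross-chain transition tensors are what let asynchronous but correlated audio and video cues reinforce each other, which is the usual motivation for coupled HMMs over a single concatenated-feature HMM.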

Claims


Application Information

IPC(8): G06K 9/62, G06K 9/66
Inventor: Lü Kun, Zhang Xin, Jia Yunde (吕坤, 张欣, 贾云得)
Owner: BEIJING INSTITUTE OF TECHNOLOGY