Voice-vision fusion emotion recognition method based on hint neural networks

A neural-network-based emotion recognition technology, applicable to biological neural network models, character and pattern recognition, instruments, etc., which addresses the problem of low recognition rates in prior methods.

Inactive Publication Date: 2013-11-20
BEIJING INSTITUTE OF TECHNOLOGY
Cites: 5 · Cited by: 34

AI Technical Summary

Problems solved by technology

[0004] The purpose of the present invention is to propose a voice-visual fusion emotion recognition method based on hint neural networks, in order to solve the problem of low recognition rates existing in the prior art.

Method used



Examples


Embodiment Construction

[0092] The implementation of the method of the present invention will be described in detail below in conjunction with the accompanying drawings and specific examples.

[0093] In this example, 6 experimenters (3 males and 3 females) read aloud with 7 discrete basic emotions (happy, sad, angry, disgusted, fearful, surprised, and neutral) in a guided (Wizard of Oz) scenario, while two cameras simultaneously capture the frontal-view face video, the side-view face video, and the voice data. In the scenario script there are 3 different sentences for each emotion, and each person repeats each sentence 5 times.
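The recording protocol above fixes the size of the corpus. A minimal sketch of that enumeration (subject IDs and emotion labels as described; variable names are illustrative, not from the patent):

```python
# Hypothetical enumeration of the recording protocol:
# 6 speakers x 7 emotions x 3 sentences x 5 repetitions, each utterance
# captured simultaneously as frontal video, side video, and audio.
subjects = [f"s{i}" for i in range(1, 7)]          # 3 male + 3 female
emotions = ["happy", "sad", "angry", "disgusted",
            "fearful", "surprised", "neutral"]
sentences_per_emotion = 3
repetitions = 5

n_utterances = (len(subjects) * len(emotions)
                * sentences_per_emotion * repetitions)
print(n_utterances)  # 630 utterances, each with 3 synchronized channels
```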

[0094] The emotion data of three subjects is randomly selected as the first training data set, which is used to train the three neural networks that each use a single-channel feature data stream. The emotion data of two subjects is then randomly selected as the second training data set, used to train the multimodal fusion neural network. Using the emotion data of the remaining pe...
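The subject-independent split described above (3 subjects for the single-channel networks, 2 for the fusion network, the remainder for testing) can be sketched as follows; the subject IDs and fixed seed are assumptions for illustration only:

```python
import random

subjects = ["s1", "s2", "s3", "s4", "s5", "s6"]  # hypothetical subject IDs
rng = random.Random(0)  # fixed seed so the sketch is reproducible

pool = subjects[:]
rng.shuffle(pool)
train_single = pool[:3]   # trains the three single-channel networks
train_fusion = pool[3:5]  # trains the multimodal fusion network
test_set = pool[5:]       # held-out subject(s) for evaluation

# The three partitions are disjoint by construction: no subject appears
# in both a training set and the test set (subject-independent evaluation).
assert len(set(train_single) | set(train_fusion) | set(test_set)) == 6
```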



Abstract

The invention provides a voice-vision fusion emotion recognition method based on hint neural networks, belonging to the field of automatic emotion recognition. The basic idea is as follows: first, feature data from three channels, i.e. the frontal facial expression, the side facial expression, and the voice of a person, are each used to independently train one neural network to recognize discrete emotion categories; during training, four hint nodes are added to the output layer of each network model, respectively carrying the hint information of four coarse-grained categories in the activation-evaluation space. The output results of the three neural networks are then fused by a multimodal fusion model, which itself uses a neural network trained with the hint information. With the help of the hint information, the learning of the neural network weights produces better feature selection. The method has a low computational cost, a high recognition rate, and strong robustness, and is particularly effective when training data are scarce.
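A minimal sketch of the architecture the abstract describes: each single-channel network carries 7 emotion units plus 4 hint units in its output layer, and the fusion network consumes the emotion outputs of the three channels. All layer sizes, feature dimensions, and the untrained random weights are assumptions for illustration; this shows shapes and data flow, not the patent's training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
N_EMOTIONS, N_HINTS, N_HIDDEN, N_FEATURES = 7, 4, 16, 20  # sizes assumed

def init(n_in, n_out):
    # Small random weights and zero biases for one fully connected layer.
    return rng.normal(0.0, 0.1, (n_in, n_out)), np.zeros(n_out)

def forward(x, layers):
    (W1, b1), (W2, b2) = layers
    h = np.tanh(x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # 7 emotion + 4 hint units

def training_target(emotion_idx, hint_code):
    # Target used only during training: one-hot emotion label concatenated
    # with the coarse activation-evaluation hint code for that emotion.
    t = np.zeros(N_EMOTIONS + N_HINTS)
    t[emotion_idx] = 1.0
    t[N_EMOTIONS:] = hint_code
    return t

# Three single-channel networks: frontal face, side face, voice.
channel_nets = [(init(N_FEATURES, N_HIDDEN),
                 init(N_HIDDEN, N_EMOTIONS + N_HINTS)) for _ in range(3)]

# At recognition time the hint units are discarded: the fusion network sees
# only the 7 emotion outputs of each channel, i.e. a 21-dimensional input.
features = [rng.normal(size=N_FEATURES) for _ in range(3)]
emotion_outputs = [forward(x, net)[:N_EMOTIONS]
                   for x, net in zip(features, channel_nets)]
fusion_input = np.concatenate(emotion_outputs)            # shape (21,)
fusion_net = (init(3 * N_EMOTIONS, N_HIDDEN),
              init(N_HIDDEN, N_EMOTIONS + N_HINTS))
prediction = int(np.argmax(forward(fusion_input, fusion_net)[:N_EMOTIONS]))
```

The design point is that the hint nodes act as auxiliary supervision: they shape the learned hidden representation during training but add nothing to the cost of recognition, since they are simply ignored at test time.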

Description

Technical Field

[0001] The invention relates to an emotion recognition method based on multi-channel information fusion, in particular to a speech-visual fusion emotion recognition method based on hint neural networks, belonging to the field of automatic emotion recognition.

Background Technique

[0002] Researchers in various disciplines have done a great deal of work in the field of automatic emotion recognition. Emotions can be expressed using discrete category methods (such as the 6 basic emotion categories proposed by Ekman), continuous dimension methods (such as activation-evaluation space methods), or appraisal-based methods. Many different features, such as facial expression, speech, body posture, and context, can be used to recognize a person's emotional state. Researchers have done extensive work on single-modal emotion recognition and analysis. [0003] Fusing information from both the speech and visual channels can improve the accuracy of emotion recognition. The ...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G06K9/62, G06K9/66, G06N3/02
Inventors: 吕坤, 张欣
Owner: BEIJING INSTITUTE OF TECHNOLOGY