Voice emotion recognition model and method based on joint feature representation

A speech emotion recognition and joint feature technology, applied in the field of speech analysis and instruments, which solves the problems that existing methods do not make full use of the complementarity of different features, model speech emotion poorly, and achieve low emotion recognition performance, with the effects of improving generalization performance, enhancing descriptive ability, and reducing parameter redundancy.

Active Publication Date: 2018-11-27
PEKING UNIV SHENZHEN GRADUATE SCHOOL

AI Technical Summary

Problems solved by technology

So far, neural network-based speech emotion recognition methods have learned emotional deep features from only a single type of feature (such as spectral or hand-crafted features). However, speech carries complex information from which a variety of features can be extracted. Existing methods do not exploit the complementarity between different features, which limits their ability to model speech emotion and results in relatively low emotion recognition performance.

Examples

Embodiment Construction

[0057] In the following, the present invention is further described through embodiments in conjunction with the accompanying drawings; these embodiments do not limit the scope of the present invention in any way.

[0058] The present invention provides a voice emotion recognition method based on joint feature representation; the method flow is shown in Figure 1. The convolutional recurrent neural network is improved: the deep features that the network learns from the frequency spectrum are fused with hand-crafted features, and the two are mapped into the same feature space through a hidden layer for classification. This makes full use of the emotion information carried in speech and models voice emotion more effectively, thereby improving the accuracy of voice emotion recognition.
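A minimal PyTorch sketch of this joint-feature architecture follows, for illustration only: the class name HSFCRNN, the layer sizes, the hand-crafted feature dimension (hsf_dim), and the number of emotion classes are assumptions, not values taken from the patent.

import torch
import torch.nn as nn

class HSFCRNN(nn.Module):
    def __init__(self, n_mels=40, hsf_dim=384, hidden_dim=128, n_classes=4):
        super().__init__()
        # Convolutional front end over a (1, n_mels, time) spectrogram.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Recurrent layer over the time axis of the convolutional output.
        self.rnn = nn.LSTM(input_size=64 * (n_mels // 4),
                           hidden_size=hidden_dim, batch_first=True)
        # Hidden layer mapping [deep feature ; hand-crafted feature] into
        # one joint feature space, as described in paragraph [0058].
        self.joint = nn.Sequential(
            nn.Linear(hidden_dim + hsf_dim, hidden_dim), nn.ReLU())
        self.classifier = nn.Linear(hidden_dim, n_classes)

    def forward(self, spec, hsf):
        # spec: (batch, 1, n_mels, time); hsf: (batch, hsf_dim)
        x = self.conv(spec)                          # (B, 64, n_mels//4, T//4)
        b, c, f, t = x.shape
        x = x.permute(0, 3, 1, 2).reshape(b, t, c * f)
        _, (h, _) = self.rnn(x)                      # final hidden state = deep feature
        joint = self.joint(torch.cat([h[-1], hsf], dim=1))
        return self.classifier(joint)                # emotion logits

model = HSFCRNN()
logits = model(torch.randn(2, 1, 40, 200), torch.randn(2, 384))
print(logits.shape)  # torch.Size([2, 4])

Concatenating the two feature streams before a shared hidden layer is the simplest realization of the "same feature space" mapping described above; the patent's actual layer configuration may differ.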

[0059] Figure 3 is a structural block diagram of a speech emotion recognition model based on joint feature representation, provided for implementing the present invention according to an e...

Abstract

The invention discloses a voice emotion recognition model and method based on joint feature representation, and relates to voice emotion recognition technology. A convolutional recurrent neural network model is improved: a hidden layer in the network is configured to learn a joint feature representation of spectral deep features and hand-crafted features, and joint feature extraction and emotion classification are integrated into a single end-to-end network model. The joint feature exploits the complementarity between spectral deep features and hand-crafted features, makes full use of the emotional information carried in the voice, and models voice emotion more completely. In addition, the end-to-end network model reduces the parameter redundancy caused by an intermediate output layer. Compared with the original voice emotion recognition method based on a pure convolutional recurrent neural network, the voice emotion recognition method based on joint feature representation improves recognition accuracy.
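The end-to-end property means a single loss trains the convolutional, recurrent, joint, and classification stages together, with no intermediate output layer. A short training sketch under the same illustrative assumptions as above (HSFCRNN is the sketch class defined earlier; the synthetic batch, optimizer choice, and learning rate are placeholders):

import torch
import torch.nn as nn

model = HSFCRNN()  # illustrative class from the sketch above
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Synthetic stand-in batch: spectrograms, hand-crafted vectors, labels.
spec = torch.randn(8, 1, 40, 200)
hsf = torch.randn(8, 384)
labels = torch.randint(0, 4, (8,))

for step in range(3):
    optimizer.zero_grad()
    loss = criterion(model(spec, hsf), labels)
    # One backward pass updates every stage at once; there is no
    # intermediate output layer, hence the reduced parameter redundancy.
    loss.backward()
    optimizer.step()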

Description

Technical field

[0001] The present invention relates to speech emotion recognition technology, and in particular to the construction of a speech emotion recognition model (HSF-CRNN) based on a convolutional recurrent neural network with joint feature representation, and to a corresponding speech emotion recognition method.

Background technique

[0002] Emotion recognition helps provide a humanized experience for human-computer interaction, allowing computers to perceive and analyze the user's emotional state and then generate appropriate responses; it is an important capability that future computers will need. Among the modalities, voice is the basic way humans communicate, so voice emotion recognition is particularly important. Speech emotion recognition is the process of labeling a given speech segment with an emotion type. Specifically, the task is to extract acoustic features that can express emotion from the collected speech signals, and then map these features to particular emotion types. ...
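As an illustration of the "extract acoustic features, then map them to emotions" pipeline, here is a hedged sketch that computes simple utterance-level statistics (MFCC means and standard deviations) as a stand-in for hand-crafted features. The use of librosa, the file name, and the feature choice are assumptions for illustration; real hand-crafted feature sets (HSFs) are considerably richer.

import numpy as np
import librosa

def utterance_features(path, sr=16000, n_mfcc=13):
    # Load the speech signal and compute frame-level MFCCs.
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    # Utterance-level statistics over frames, in the spirit of HSFs.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# feats = utterance_features("sample.wav")  # hypothetical file path
# feats.shape -> (26,)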

Application Information

Patent Type & Authority Applications(China)
IPC IPC(8): G10L25/63G10L25/30G10L25/24
CPCG10L25/24G10L25/30G10L25/63
Inventor 邹月娴罗丹青
Owner PEKING UNIV SHENZHEN GRADUATE SCHOOL