Speech emotion recognition method based on manifold

A speech emotion recognition technology based on manifolds, applied in speech recognition, speech analysis, instruments, etc. It addresses problems such as the fine granularity of frame-level features and the neglect of the relationship between adjacent frames, with the effect of enhancing performance and improving recognition accuracy.

Active Publication Date: 2013-12-11
SOUTH CHINA UNIV OF TECH

AI Technical Summary

Problems solved by technology

[0006] The first approach generates statistical features through global statistics; its disadvantage is that it ignores the local information of the speech signal.
[0007] The second approach generates statistical features only through the Gaussian Mixture Model-Universal Background Model (GMM-UBM). Although this method can reflect the information of each frame of speech, it ignores the relationship between adjacent frames.



Examples


Embodiment Construction

[0048] As shown in figures 1 and 2, the manifold-based speech emotion recognition method consists of the following sequential steps:

[0049] (1) Extract the following speech features of the test sentence: MFCC, LPCC, LFPC, ZCPA, PLP and RASTA-PLP, where the number of Mel filters for MFCC and LFPC is 40; the linear prediction orders of LPCC, PLP and RASTA-PLP are 12, 16 and 16 respectively; and the ZCPA frequency segment boundaries are 0, 106, 223, 352, 495, 655, 829, 1022, 1236, 1473, 1734, 2024, 2344, 2689, 3089, 3522 and 4000 Hz. Each frame of each sentence thus yields features from the 6 extraction methods with dimensions 39, 40, 12, 16, 16 and 16 respectively, so the number of features extracted per frame is 39+40+12+16+16+16 = 139;
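
As an illustration of step (1), the following is a minimal sketch of per-frame feature extraction for one of the six feature types (MFCC), assuming librosa is available. The split of the 39 MFCC dimensions into 13 static coefficients plus delta and delta-delta, the 16 kHz sample rate, the file name, and the hypothetical extractors for the other five feature types (LPCC, LFPC, ZCPA, PLP and RASTA-PLP) are assumptions, not details taken from the patent; only the 40 Mel filters and the 139-dimensional per-frame total come from the text.

```python
import numpy as np
import librosa

def mfcc_39(y, sr):
    """39-dim MFCC per frame: 13 static + 13 delta + 13 delta-delta (assumed split)."""
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, n_mels=40)  # 40 Mel filters, as in the patent
    d1 = librosa.feature.delta(m)
    d2 = librosa.feature.delta(m, order=2)
    return np.vstack([m, d1, d2])                               # shape: (39, n_frames)

def frame_features(y, sr, extra_extractors=()):
    """Concatenate per-frame features from all extractors into one (dim, n_frames) matrix.

    `extra_extractors` stands in for hypothetical LPCC/LFPC/ZCPA/PLP/RASTA-PLP
    extractors returning (dim_i, n_frames) arrays; with all six feature types
    the per-frame dimension is 39 + 40 + 12 + 16 + 16 + 16 = 139.
    """
    blocks = [mfcc_39(y, sr)] + [f(y, sr) for f in extra_extractors]
    return np.vstack(blocks)

y, sr = librosa.load("utterance.wav", sr=16000)  # example file name and sample rate (assumed)
X = frame_features(y, sr)                        # (39, n_frames) with MFCC only
```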

[0050] (2) Calculate the first-order difference D of all features of each utterance, where F in D denotes the six features described in step (1); then calculate the local mean and variance of all F and D to obtain LDM, LDS, LM and th...
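
A minimal sketch of step (2), assuming numpy: the first-order difference D of the per-frame features, and the local means and variances of both F and D. The sliding-window length used to make the statistics "local", the use of non-overlapping windows, and the name LS for the local variance of F (the text is truncated at this point) are assumptions.

```python
import numpy as np

def local_stats(F, win=25):
    """Local mean and variance over non-overlapping windows of frames (window length assumed).

    F: (dim, n_frames) per-frame features.
    Returns two (dim, n_windows) arrays: local means and local variances.
    """
    dim, n = F.shape
    starts = range(0, max(n - win + 1, 1), win)
    means = np.stack([F[:, s:s + win].mean(axis=1) for s in starts], axis=1)
    varis = np.stack([F[:, s:s + win].var(axis=1) for s in starts], axis=1)
    return means, varis

def local_statistical_features(F):
    """Concatenate local mean/variance of F and of its first-order difference D."""
    D = np.diff(F, axis=1)              # first-order difference of the per-frame features
    LM, LS = local_stats(F)             # local mean / variance of F ("LS" is an assumed name)
    LDM, LDS = local_stats(D)           # local mean / variance of D
    k = min(LM.shape[1], LDM.shape[1])  # D has one frame fewer, so align the window counts
    return np.vstack([LM[:, :k], LS[:, :k], LDM[:, :k], LDS[:, :k]])
```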



Abstract

The invention provides a speech emotion recognition method based on manifold, comprising the following steps: the speech features of a test sentence are extracted, namely MFCC, LPCC, LFPC, ZCPA, PLP and RASTA-PLP; the local mean and variance of each extracted speech feature, and the local mean and variance of its first-order difference, are calculated and concatenated in series to form the local statistical features of the test sentence; a universal background model (UBM) and the local statistical features of the test sentence are used to generate a Gaussian mixture model (GMM) specific to the test sentence, and all the mean vectors of the GMM are concatenated into a single vector that serves as the feature vector of the test sentence; the feature vector of the test sentence is then reduced to the features selected by the integrated feature selection algorithm and the multi-cluster feature selection algorithm (MCFS); finally, a support vector machine classification model takes the feature vector of the test sentence after feature selection as input and determines the emotion class of the sentence by classification. The speech emotion recognition accuracy of the method is high.
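
To make the GMM-UBM supervector and classification stages concrete, here is a minimal sketch using scikit-learn's GaussianMixture and SVC. The number of mixture components, the diagonal covariance type, the relevance factor r of the MAP adaptation, and the plain index selection standing in for the integrated/MCFS feature selection step are all assumptions; only the overall pipeline (UBM, per-sentence mean adaptation, mean supervector, feature selection, SVM) follows the abstract.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def train_ubm(background_frames, n_components=64):
    """Universal background model trained on pooled local statistical features (K assumed)."""
    ubm = GaussianMixture(n_components=n_components, covariance_type="diag",
                          max_iter=200, random_state=0)
    ubm.fit(background_frames)              # (n_frames_total, dim)
    return ubm

def map_adapt_means(ubm, X, r=16.0):
    """MAP adaptation of the UBM means to one utterance X of shape (n_frames, dim)."""
    resp = ubm.predict_proba(X)             # (n_frames, K) responsibilities
    n_k = resp.sum(axis=0) + 1e-10          # soft frame counts per component
    e_k = (resp.T @ X) / n_k[:, None]       # per-component data means
    alpha = n_k / (n_k + r)                 # adaptation coefficients (r assumed)
    return alpha[:, None] * e_k + (1.0 - alpha)[:, None] * ubm.means_

def supervector(ubm, X):
    """Concatenate the adapted means into one fixed-length feature vector."""
    return map_adapt_means(ubm, X).ravel()

# --- usage sketch (X_bg, train_utts, train_labels, test_utt are assumed inputs) ---
# ubm = train_ubm(X_bg)
# train_vecs = np.stack([supervector(ubm, u) for u in train_utts])
# selected = np.arange(train_vecs.shape[1])   # stand-in for MCFS-selected indices
# clf = SVC(kernel="linear").fit(train_vecs[:, selected], train_labels)
# pred = clf.predict(supervector(ubm, test_utt)[None, selected])
```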

Description

Technical field
[0001] The invention relates to the field of speech signal processing and recognition, and in particular to a manifold-based speech emotion recognition method.
Background technique
[0002] With the continuous development of information technology, people place ever higher demands on the intelligence of computers. In human-computer interaction, a computer with emotion recognition capability can recognize human emotions and act on the recognition results, which makes devices easier to operate and gives users a better experience. For example, emotion recognition technology can detect whether a driver's attention is focused, how much stress the driver feels, and so on, and decide whether to issue an alarm based on the recognition results so as to improve driving safety; emotion recognition can also be applied to robots, smart toys, games, e-commerce and other related industries, where it he...


Application Information

IPC(8): G10L15/02, G10L15/06
Inventor: 文贵华, 孙亚新, 李辉辉
Owner: SOUTH CHINA UNIV OF TECH