
Speech-emotion recognition method based on improved Fukunage-koontz transformation

A speech emotion recognition technology, applied in speech recognition, speech analysis, instruments, etc., that addresses the problems that frame-based global features cannot effectively reflect the internal structure of speech, that the manifold structure of the samples is not captured, and that inter-frame correlation is not taken into account.

Inactive Publication Date: 2011-07-27
邹采荣 +1
Cites: 0 · Cited by: 2

AI Technical Summary

Problems solved by technology

At present, commonly used speech features are mainly extracted frame by frame and then summarized into global statistics, which cannot effectively characterize the structural properties of the frame sequence. Dimensionality reduction with principal component analysis or linear discriminant analysis is likewise based on global considerations and cannot effectively reflect the internal structure of the speech.

The traditional Fukunage-koontz transformation is also a global dimensionality-reduction method: it does not take into account the correlation between sample values within local time windows and therefore cannot effectively reflect the manifold structure inside the samples.

Embodiment Construction

[0056] The technical solutions of the present invention will be further described below in conjunction with the drawings and embodiments.

[0057] Figure 1 shows the system block diagram, which is divided into three main blocks: the feature extraction and analysis module, the improved Fukunage-koontz transformation module, and the emotion recognition module.

[0058] 1. Emotional feature extraction and analysis module

[0059] 1. Linear prediction cepstral coefficient parameter extraction

[0060] First, following the feature parameter extraction flow in Figure 2, the sentences to be processed are pre-emphasized, which includes high-pass filtering and detection of the start and end endpoints of each sentence; each sentence is then divided into frames and windowed, the Durbin fast algorithm is used to compute the linear prediction coefficients of each frame, and the complex cepstrum and the linear prediction cepstral coefficients are calculated from the linear prediction coefficients...
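
The linear prediction cepstral coefficient step above can be sketched as follows. This is a minimal illustration for a single pre-emphasized, windowed frame; the function names, the model order and the number of cepstral coefficients are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson-Durbin recursion (the 'Durbin fast algorithm'): linear
    prediction coefficients a_1..a_p from the frame autocorrelation r[0..p]."""
    a = np.zeros(order + 1)
    err = r[0]
    for i in range(1, order + 1):
        k = (r[i] - np.dot(a[1:i], r[i - 1:0:-1])) / err   # reflection coefficient
        new_a = a.copy()
        new_a[i] = k
        new_a[1:i] = a[1:i] - k * a[i - 1:0:-1]
        a = new_a
        err *= (1.0 - k * k)                               # residual prediction error
    return a[1:], err

def lpcc(frame, order=12, n_ceps=12):
    """Linear prediction cepstral coefficients of one windowed frame,
    using the standard LPC-to-cepstrum recursion."""
    # Biased autocorrelation up to the model order.
    r = np.array([frame[:len(frame) - k] @ frame[k:] for k in range(order + 1)])
    a, _ = levinson_durbin(r, order)
    c = np.zeros(n_ceps + 1)
    for m in range(1, n_ceps + 1):
        c[m] = a[m - 1] if m <= order else 0.0
        for k in range(1, m):
            if m - k <= order:
                c[m] += (k / m) * c[k] * a[m - k - 1]
    return c[1:]
```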


Abstract

The invention discloses a speech-emotion recognition method based on an improved Fukunage-koontz transformation (FKT). The transformation effectively realizes feature dimension reduction. During dimension reduction, the internal manifold structure of speech is taken into account: a parameter tau describing inter-frame correlation is introduced and used to weight the feature covariance, so that the features with maximum/minimum variance after dimension reduction can be found. The variance is then taken as the discriminative information for classification, and the various speech emotions are recognized with a k-nearest-neighbor method. Compared with prior recognition methods of the same kind, the method effectively improves the recognition rate.
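
As a rough illustration of the dimension-reduction idea described above, the sketch below computes a classic two-class Fukunaga-Koontz basis from class covariance matrices, together with one possible temporally weighted covariance estimate controlled by a parameter tau. The exponential weighting is an assumption made for illustration; the patent's exact weighting formula is not reproduced here.

```python
import numpy as np

def weighted_covariance(frames, tau):
    """Covariance of a (n_frames, n_features) frame sequence in which frame
    pairs are down-weighted by their time distance.  The exponential kernel
    exp(-|i-j|/tau) is an illustrative stand-in for the patent's inter-frame
    correlation weighting, not its actual formula."""
    Xc = frames - frames.mean(axis=0)
    idx = np.arange(len(frames))
    W = np.exp(-np.abs(idx[:, None] - idx[None, :]) / tau)
    return (Xc.T @ W @ Xc) / W.sum()

def fukunaga_koontz_basis(S1, S2):
    """Classic two-class Fukunaga-Koontz transform from class covariances.
    After whitening S1 + S2, the two classes share eigenvectors and their
    eigenvalues sum to one, so directions of maximum variance for one class
    are directions of minimum variance for the other."""
    d, Phi = np.linalg.eigh(S1 + S2)
    keep = d > 1e-10                          # discard the joint null space
    W = Phi[:, keep] / np.sqrt(d[keep])       # whitening transform
    lam1, V = np.linalg.eigh(W.T @ S1 @ W)    # eigenvalues for class 1
    return W @ V, lam1, 1.0 - lam1            # basis, class-1 and class-2 eigenvalues
```

Projecting features onto the leading and trailing columns of the returned basis and comparing the resulting variances provides the class-discriminative information mentioned in the abstract.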

Description

Technical field

[0001] The invention relates to a speech recognition method, in particular to a speech emotion recognition method.

Background technique

[0002] Automatic speech emotion recognition mainly involves two problems: one is which features of the speech signal to use for emotion recognition, i.e. the problem of emotional feature extraction, including feature extraction and selection; the other is how to classify the given speech data, i.e. the problem of pattern recognition, covering various pattern recognition algorithms such as nearest neighbors, neural networks, support vector machines, etc.

[0003] The emotional feature parameters commonly used in speech emotion recognition include: linear prediction coefficients, linear prediction cepstral coefficients, Mel cepstral coefficients, short-term energy, fundamental frequency, formants, etc. Among them, the linear prediction coefficients can be considered as an estimation of the all-pole model of the ...
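
Since the abstract adopts a k-nearest-neighbor decision over the reduced features, a minimal k-NN classifier is sketched below; the Euclidean metric and the default value of k are illustrative assumptions, not the patent's settings.

```python
import numpy as np

def knn_predict(train_feats, train_labels, query, k=5):
    """Minimal k-nearest-neighbor classifier: majority vote among the k
    training samples closest (Euclidean distance) to the query feature."""
    dists = np.linalg.norm(train_feats - query, axis=1)
    nearest = np.asarray(train_labels)[np.argsort(dists)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]
```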

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC (8): G10L15/06, G10L15/08, G10L15/02, G10L15/00
Inventors: 邹采荣, 赵力, 赵艳, 魏昕
Owner: 邹采荣