Emotional speaker recognition method based on frequency spectrum translation

A speaker recognition and spectrum-processing technology, applied in speech analysis, instruments, etc.; it addresses problems such as a recognition system failing to meet application requirements when the emotion of the test speech differs from that of the training speech.

Active Publication Date: 2009-04-29
ZHEJIANG UNIV
Cites: 0 · Cited by: 14

AI Technical Summary

Problems solved by technology

If users are simply asked to provide speech in a variety of emotional states...


Examples


Embodiment Construction

[0064] When the inventive method is implemented:

[0065] Step 1: Audio Preprocessing

[0066] Audio preprocessing is divided into four parts: sample quantization, zero drift removal, pre-emphasis and windowing.

[0067] 1. Sampling and quantization

[0068] A) Filter the collected audio signal under test with a sharp cut-off filter so that its Nyquist frequency F_N = 4 kHz;

[0069] B) Set the audio sampling rate F = 2F_N;

[0070] C) Sample the audio signal s_a(t) with period 1/F to obtain the amplitude sequence of the digital audio signal: s(n) = s_a(n/F);

[0071] D) Quantize and encode s(n) using pulse code modulation (PCM) to obtain the quantized amplitude sequence s'(n).
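Steps A)-D) above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the bit depth (16-bit) and the tone used as input are assumptions, since the excerpt only specifies PCM coding and F = 2F_N with F_N = 4 kHz.

```python
import numpy as np

def pcm_quantize(s, n_bits=16):
    """Uniform PCM quantization of a float signal in [-1, 1] to signed
    integers. n_bits=16 is an assumption; the patent only says 'PCM'."""
    s = np.clip(s, -1.0, 1.0)
    q_max = 2 ** (n_bits - 1) - 1          # 32767 for 16-bit
    return np.round(s * q_max).astype(np.int16)

# Sample a 1 kHz tone at F = 2 * F_N = 8 kHz, then quantize it:
F = 8000                                    # sampling rate, step B)
t = np.arange(0, 0.01, 1.0 / F)             # 10 ms of samples, step C)
s_a = 0.5 * np.sin(2 * np.pi * 1000 * t)    # amplitude sequence s(n)
s_q = pcm_quantize(s_a)                     # quantized sequence s'(n), step D)
```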

[0072] 2. Zero-drift removal

[0073] A) Calculate the mean s of the qu...
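The text truncates here, but paragraph [0066] names the remaining preprocessing parts: zero-drift removal, pre-emphasis, and windowing. A sketch of all three under standard assumptions (mean subtraction for de-drifting, pre-emphasis coefficient 0.97, a 256-sample Hamming window with 50% overlap — none of these constants appear in the excerpt):

```python
import numpy as np

def remove_zero_drift(s):
    """Zero-drift removal: subtract the signal mean. The excerpt starts to
    describe computing the mean; mean subtraction is the standard form."""
    return s - np.mean(s)

def pre_emphasis(s, alpha=0.97):
    """First-order pre-emphasis y(n) = s(n) - alpha * s(n-1).
    alpha = 0.97 is a common choice, assumed here."""
    return np.append(s[0], s[1:] - alpha * s[:-1])

def frame_and_window(s, frame_len=256, hop=128):
    """Split into overlapping frames and apply a Hamming window.
    Frame sizes are illustrative assumptions."""
    n_frames = 1 + (len(s) - frame_len) // hop
    w = np.hamming(frame_len)
    return np.stack([s[i * hop:i * hop + frame_len] * w
                     for i in range(n_frames)])

# A constant offset is removed entirely by de-drifting:
s = np.arange(1024, dtype=float) % 7 + 100.0
frames = frame_and_window(pre_emphasis(remove_zero_drift(s)))
```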



Abstract

The invention discloses an emotional speaker recognition method based on frequency spectrum translation, comprising the following steps: (1) after the audio signal under test is collected, it is processed in turn by sampling and quantization, zero-drift removal, pre-emphasis, and windowing, yielding windowed speech frames; (2) a fast Fourier transform (FFT) is applied to each windowed frame to obtain its spectrum, and the spectrum-translation method is used to generate several groups of spectra, each with a different formant distribution; (3) each spectrum is filtered with a Mel filter bank, and the speech features are obtained by discrete cosine compression; (4) the speech features of the audio signal under test are extracted by the flow of steps (1)-(3), the match score is computed by the maximum-score method, and the recognition result is output. The method changes the formant distribution of neutral emotional speech and synthesizes speech spectra with different formant distributions, thereby increasing the system's familiarity with various kinds of emotional speech and improving its recognition rate.
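Steps (2)-(3) of the abstract can be sketched as below. This is one plausible reading, not the patent's exact scheme: "spectrum translation" is modeled as shifting FFT magnitude bins, the garbled "masure filter" is read as a Mel filter bank, and all sizes (24 filters, 256-point FFT, 13 cepstra, shifts of ±2 bins) are illustrative assumptions.

```python
import numpy as np

def shift_spectrum(mag, shift_bins):
    """Translate a magnitude spectrum by `shift_bins` FFT bins. Positive
    shifts move energy (and hence formants) up in frequency; vacated bins
    are zero-filled. One plausible reading of 'spectrum translation'."""
    out = np.zeros_like(mag)
    if shift_bins == 0:
        out[:] = mag
    elif shift_bins > 0:
        out[shift_bins:] = mag[:len(mag) - shift_bins]
    else:
        out[:shift_bins] = mag[-shift_bins:]
    return out

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters=24, n_fft=256, fs=8000):
    """Triangular filters evenly spaced on the Mel scale (standard form)."""
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fb[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fb[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    return fb

def dct2(x, n_out=13):
    """DCT-II compression of the log filterbank energies."""
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi * k * (2 * n + 1) / (2 * N)))
                     for k in range(n_out)])

def frame_features(frame, fb, shift_bins=0, n_ceps=13):
    """FFT -> spectrum translation -> Mel filtering -> DCT, per steps (2)-(3)."""
    mag = np.abs(np.fft.rfft(frame))
    mag = shift_spectrum(mag, shift_bins)
    log_e = np.log(fb @ mag + 1e-10)
    return dct2(log_e, n_ceps)

# One feature vector per spectrum shift, e.g. shifts of -2, 0, +2 bins:
fb = mel_filterbank()
frame = np.hamming(256) * np.sin(2 * np.pi * 1000 / 8000 * np.arange(256))
feats = [frame_features(frame, fb, k) for k in (-2, 0, 2)]
```

Each shift yields a feature vector with a different formant placement, so one utterance produces several feature streams for scoring.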

Description

Technical field

[0001] The invention relates to signal processing and pattern recognition, and in particular to an emotional speaker recognition method based on frequency spectrum translation.

Background technique

[0002] Speaker recognition refers to identifying a speaker's identity from his or her voice using signal-processing and pattern-recognition methods. Emotional speaker recognition is speaker recognition in which the training and test utterances contain emotional speech. Because the emotion of the test speech may not match that of the training speech, the recognition rate of the system drops significantly. The method proposed in this patent reduces the degradation of system performance caused by this emotional mismatch between training and test speech. [0003] At present, speaker recognition methods mainly consist of two steps. The first step is feature extraction...
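The abstract's step (4), the maximum-score decision, can be sketched as follows. This is a hedged illustration, not the patent's implementation: the scorer functions, names, and toy features are all hypothetical stand-ins for whatever speaker models (e.g. GMMs) the full patent uses.

```python
import numpy as np

def identify(test_feats_by_shift, speaker_scorers):
    """Maximum-score decision: score each enrolled speaker against features
    from every spectrum-shifted version of the test audio, keep each
    speaker's best score, and return the top speaker.
    `speaker_scorers` maps speaker id -> log-likelihood function
    (an assumed interface, not from the patent text)."""
    best = {}
    for spk, score_fn in speaker_scorers.items():
        best[spk] = max(score_fn(f) for f in test_feats_by_shift)
    return max(best, key=best.get), best

# Toy usage with hand-made scorers (assumptions, not the patent's models):
scorers = {
    "alice": lambda f: -np.sum((f - 1.0) ** 2),  # peaks for features near 1.0
    "bob":   lambda f: -np.sum((f + 1.0) ** 2),  # peaks for features near -1.0
}
feats = [np.ones(3), np.zeros(3)]   # two shifted variants of one utterance
winner, scores = identify(feats, scorers)
```

Taking the maximum over the shifted variants lets an emotionally mismatched test utterance match the enrolled model through whichever formant placement fits best.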

Claims


Application Information

IPC(8): G10L17/00; G10L17/02; G10L19/26; G10L25/63
Inventors: 杨莹春 (Yang Yingchun), 吴朝晖 (Wu Zhaohui), 单振宇 (Shan Zhenyu)
Owner ZHEJIANG UNIV