Voice affective characteristic extraction method capable of combining local information and global information

A speech-analysis technology that combines global information with emotional features. It addresses problems such as sensitivity to noise, and offers a simple, easy-to-implement feature extraction framework.

Active Publication Date: 2014-01-22
SOUTH CHINA UNIV OF TECH


Problems solved by technology

However, MFCC ignores the energy distribution information inside each Mel filter and the local distribution information between the outputs of different filters within a frame, and is sensitive to noise.




Embodiment

[0041] As shown in Figure 1, a speech emotion feature extraction method combining local and global information comprises the following steps:

[0042] Step 1: Frame and window the speech signal to obtain S_k(n). Framing uses the following two formulas, where N is the frame length, inc is the frame shift (the number of sampling points by which the next frame is offset), fix(x) truncates x to the nearest integer toward zero, fs is the sampling rate of the speech signal, bw is the frequency resolution of the spectrogram (the present invention takes bw = 60 Hz), and k denotes the k-th frame. The window function is the Hamming window.

[0043] N = fix(1.81 * fs / bw),  (1)

[0044] inc = fix(1.81 * fs / (4 * bw)),  (2);
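The framing and windowing of Step 1 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the frame length follows equation (1), and the frame shift is taken as one quarter of the frame length, an assumption consistent with the 4-fold denominator in equation (2).

```python
import numpy as np

def frame_and_window(signal, fs, bw=60.0):
    """Split a speech signal into Hamming-windowed frames S_k(n).

    Frame length N follows eq. (1); the frame shift `inc` is assumed
    to be N // 4 (a quarter-frame hop, per the factor 4 in eq. (2)).
    """
    N = int(np.fix(1.81 * fs / bw))   # frame length, eq. (1)
    inc = max(1, N // 4)              # frame shift (assumed quarter-frame)
    window = np.hamming(N)
    n_frames = 1 + max(0, (len(signal) - N) // inc)
    frames = np.stack([signal[k * inc : k * inc + N] * window
                       for k in range(n_frames)])
    return frames                     # shape (n_frames, N)

# Example: one second of audio at 16 kHz, bw = 60 Hz as in the text
fs = 16000
x = np.random.randn(fs)
frames = frame_and_window(x, fs)
```

With fs = 16000 and bw = 60, each frame has N = fix(1.81 * 16000 / 60) = 482 samples.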

[0045] Step 2: Perform a short-time Fourier transform on S_k(n) to obtain F_k(n), then apply formula (3) to F_k(n) to obtain the Mel frequency G_k(n).

[0046] Mel(f) = 2595 * lg(1 + f / 700),  (3);
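Formula (3) is the standard Hz-to-Mel mapping. A direct sketch, with the inverse mapping added because it is what one typically needs when placing the centre frequencies of the Mel filter bank used in the next step:

```python
import numpy as np

def hz_to_mel(f):
    # Eq. (3): Mel(f) = 2595 * log10(1 + f / 700)
    return 2595.0 * np.log10(1.0 + np.asarray(f, dtype=float) / 700.0)

def mel_to_hz(m):
    # Inverse of eq. (3), useful for positioning filter centre frequencies
    return 700.0 * (10.0 ** (np.asarray(m, dtype=float) / 2595.0) - 1.0)
```

The mapping is monotone, so equally spaced points on the Mel axis map back to frequencies that are denser at the low end of the spectrum.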

[0047] Step 3: First use formula (4) to define a filte...


Abstract

The invention discloses a voice affective characteristic extraction method capable of combining local information and global information, which extracts three characteristics and belongs to the technical fields of voice signal processing and pattern recognition. The method comprises the following steps: (1) framing the voice signal; (2) carrying out a Fourier transform on each frame; (3) filtering the Fourier transform result with a Mel filter, computing the energy of the filtered result, and taking its logarithm; (4) carrying out a local Hu operation on the logarithm result to obtain the first characteristic; (5) carrying out a discrete cosine transform on each frame after the local Hu operation to obtain the second characteristic; (6) carrying out a difference operation on the logarithm result of step (3), and carrying out a discrete cosine transform on each frame of the difference result to obtain the third characteristic. The method can quickly and effectively represent the voice of each emotion, and its applications include voice retrieval, voice recognition, affective computing, and related fields.
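Steps (4)-(6) of the abstract can be sketched on a matrix of log-Mel energies. This is an illustrative skeleton only: the "local Hu operation" of step (4) is not detailed in this excerpt, so it is represented by a placeholder identity, while the per-frame DCT (step 5) and the frame difference followed by a DCT (step 6) are written out. `dct_ii` is a hand-rolled orthonormal DCT-II, not a function named in the patent.

```python
import numpy as np

def dct_ii(x):
    """Orthonormal DCT-II along the last axis (the per-frame global transform)."""
    n = x.shape[-1]
    idx = np.arange(n)
    basis = np.cos(np.pi * (2 * idx[None, :] + 1) * idx[:, None] / (2 * n))
    basis *= np.sqrt(2.0 / n)
    basis[0] /= np.sqrt(2.0)
    return x @ basis.T

def extract_features(log_mel):
    """Steps (4)-(6) on a (n_frames, n_filters) array of log-Mel energies.

    Step (4)'s 'local Hu operation' is not specified in this excerpt and
    is left as a placeholder identity here.
    """
    feat1 = log_mel.copy()            # step (4): placeholder for local Hu op
    feat2 = dct_ii(feat1)             # step (5): DCT of each frame
    diff = np.diff(log_mel, axis=0)   # step (6): difference between frames
    feat3 = dct_ii(diff)              #           then DCT of each difference frame
    return feat1, feat2, feat3

# Toy input: 10 frames of 26 log-Mel filter energies
log_mel = np.log(np.abs(np.random.randn(10, 26)) + 1e-3)
f1, f2, f3 = extract_features(log_mel)
```

Note that the difference operation of step (6) shortens the feature matrix by one frame.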

Description

Technical field

[0001] The invention relates to speech signal processing and pattern recognition technology, in particular to a speech emotion feature extraction method combining local and global information.

Background technique

[0002] With the continuous development of information technology, society places ever higher demands on affective computing. For example, in human-computer interaction, a computer with emotional capabilities can acquire, classify, recognize, and respond to human emotions, helping users experience efficient and friendly interaction, effectively reducing the frustration of using computers, and even helping people understand their own and others' emotional worlds. Such technology can be used, for instance, to detect whether a driver is concentrating or how much stress the driver feels, and to respond accordingly. In addition, affective computing can also be applied in related industri...


Application Information

IPC(8): G10L25/63, G10L25/03
Inventor: Wen Guihua (文贵华), Sun Yaxin (孙亚新)
Owner SOUTH CHINA UNIV OF TECH