
Speech emotion recognition based on slice convolution

A speech emotion recognition method in speech technology, applied in the computer field, which solves the problem that long speech signals cannot be given a unified time-frequency analysis and achieves the effect of improved recognition efficiency.

Active Publication Date: 2017-11-17
NANJING UNIV OF POSTS & TELECOMM

AI Technical Summary

Problems solved by technology

[0006] The object of the present invention is to provide a speech emotion recognition method based on time slicing. The method slices the speech signal with a time window, applies time-frequency analysis to each slice, and then reconstructs the temporal sequence the speech had before slicing. This solves the problem that a long speech signal cannot be given a single unified time-frequency analysis. Deep learning is then used for emotion classification, which greatly improves recognition efficiency.

Method used



Embodiment Construction

[0023] The following will clearly describe the technical solutions in the embodiments of the present invention with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are some of the embodiments of the present invention, but not all of them. Based on the embodiments of the present invention, all other embodiments obtained by persons of ordinary skill in the art without creative efforts fall within the protection scope of the present invention.

[0024] As shown in Figure 1 and Figure 3, the present invention provides a speech emotion recognition method based on time slicing, which maps the time-domain features of sound into the spatial domain for recognition through sequences, and includes the following steps:

[0025] Step 1: Set the maximum time length of a speech signal. For example, in the data set of the embodiment of the present invention, the maximum speech length is 10 s. For speech signals that do not...
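Step 1 (equal-length complementing) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the Gaussian noise distribution, the `noise_scale` amplitude, and the truncation of over-long signals are all assumptions, since the excerpt only states that white noise is used to pad speech to a fixed maximum length.

```python
import numpy as np

def pad_with_white_noise(signal, sr, max_seconds=10.0, noise_scale=1e-3, seed=0):
    """Pad a speech signal to a fixed maximum length with low-amplitude
    white noise (Step 1, equal-length complementing).

    noise_scale and the Gaussian noise model are illustrative assumptions.
    """
    target_len = int(sr * max_seconds)
    if len(signal) >= target_len:
        # Assumption: signals longer than the maximum are truncated.
        return signal[:target_len]
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, noise_scale, target_len - len(signal))
    return np.concatenate([signal, noise])

# Example: a 3 s signal at 16 kHz is padded out to the 10 s maximum.
sr = 16000
x = np.zeros(3 * sr)
y = pad_with_white_noise(x, sr)
```

Padding with low-amplitude noise rather than zeros keeps the appended region statistically non-degenerate, so later time-frequency analysis does not produce empty frames.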


Abstract

The invention discloses a speech emotion recognition method based on time slicing. The method comprises: a first step of equal-length speech complementing, setting the maximum time length of a speech signal and padding a read-in speech signal to that length with white noise; a second step of speech signal slicing, in which the signal obtained in the first step is sliced by equal-length time windows and by speech-envelope units; a third step of time-frequency analysis of each speech segment using a time-frequency analysis tool; a fourth step of sequence reconstruction; a fifth step of emotion labeling; a sixth step of image feature extraction; a seventh step of emotion training; and an eighth step of speech emotion recognition, in which an emotion feature set is obtained for a newly input speech signal using the first to sixth steps and input into the emotion classification model obtained in the seventh step, finally yielding the emotion type of the current speech. The method adopts time-slice convolution and deep learning for speech emotion classification and substantially improves recognition efficiency.
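Steps 2–4 of the abstract (slicing, per-slice time-frequency analysis, sequence reconstruction) can be sketched as below. The 1 s slice length and the spectrogram parameters (`nperseg`, `noverlap`) are illustrative assumptions not fixed in this excerpt, and equal-length time windows are shown rather than the envelope-unit slicing the abstract also mentions.

```python
import numpy as np
from scipy.signal import spectrogram

def slice_and_analyze(signal, sr, slice_seconds=1.0):
    """Slice an equal-length speech signal into fixed time windows and
    compute a spectrogram image for each slice (Steps 2-3), then stack
    the images in original order to preserve the pre-slice temporal
    sequence (Step 4).

    slice_seconds and the spectrogram settings are assumptions.
    """
    slice_len = int(sr * slice_seconds)
    n_slices = len(signal) // slice_len
    images = []
    for i in range(n_slices):
        seg = signal[i * slice_len:(i + 1) * slice_len]
        _, _, Sxx = spectrogram(seg, fs=sr, nperseg=256, noverlap=128)
        images.append(Sxx)
    # Stacking keeps the slices ordered, so the sequence relationship
    # of the original speech is reconstructed for the classifier.
    return np.stack(images)

sr = 16000
x = np.random.default_rng(0).normal(size=10 * sr)  # a padded 10 s signal
feats = slice_and_analyze(x, sr)
print(feats.shape)  # (n_slices, freq_bins, time_frames)
```

The resulting stack of spectrogram images is the kind of sequence-of-images input that a convolutional network can consume in Steps 6–7.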

Description

Technical field

[0001] The invention relates to a speech emotion recognition method based on slice convolution, which belongs to the technical field of computers.

Background technique

[0002] The extraction of speech emotion features has always been a key issue in speech emotion recognition. At present, the acoustic features used in speech emotion recognition can be roughly grouped into prosodic features, spectrum-based features, and voice-quality features. Prosody refers to the variations in pitch, duration, speed, and stress that overlay the semantic symbols in speech; it is a structural arrangement of the speech stream. Commonly used prosodic features are duration, fundamental frequency, and energy. Spectrum-based features are considered to reflect the correlation between vocal-tract shape changes and articulator movement. Linear spectral features used in speech emotion recognition generally include LPC, OSALPC, and LFPC. Cepstral features...
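Two of the prosodic features named in the background, short-time energy and fundamental frequency, can be extracted with a simple sketch like the one below. This is background illustration only, not the patent's own feature extractor; the frame length, hop size, pitch search range, and autocorrelation method are all assumptions.

```python
import numpy as np

def frame_energy_and_f0(signal, sr, frame_len=512, hop=256):
    """Per-frame short-time energy and a crude fundamental-frequency
    estimate via autocorrelation peak picking.

    frame_len, hop, and the 60-400 Hz pitch range are assumptions.
    """
    energies, f0s = [], []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        energies.append(float(np.sum(frame ** 2)))
        # Autocorrelation for non-negative lags only.
        ac = np.correlate(frame, frame, mode="full")[frame_len - 1:]
        # Strongest peak within a plausible pitch range (60-400 Hz).
        lo, hi = sr // 400, sr // 60
        lag = lo + int(np.argmax(ac[lo:hi]))
        f0s.append(sr / lag)
    return np.array(energies), np.array(f0s)

# Example: a 1 s, 200 Hz tone should yield F0 estimates near 200 Hz.
sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 200 * t)
e, f0 = frame_energy_and_f0(x, sr)
```

Real systems typically use more robust pitch trackers, but the autocorrelation view makes clear why fundamental frequency is recoverable frame by frame.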

Claims


Application Information

IPC(8): G10L15/06; G10L25/30; G10L25/63; G10L15/08
CPC: G10L15/06; G10L15/063; G10L15/08; G10L25/30; G10L25/63
Inventors: 李华康, 蔡汇聪, 金旭, 孙国梓, 李涛
Owner NANJING UNIV OF POSTS & TELECOMM