
A cross-database speech emotion recognition method based on deep domain adaptive convolutional neural network

A convolutional neural network and speech emotion recognition technology, applied in the field of cross-database speech emotion recognition, achieving high recognition accuracy by narrowing the feature-distribution difference between databases.

Active Publication Date: 2021-07-27
SOUTHEAST UNIV
Cites: 5 · Cited by: 0

AI Technical Summary

Problems solved by technology

The difficulty of cross-database speech emotion recognition lies in extracting appropriate speech features and reducing the difference in feature distribution between the source database and the target database.



Examples


Embodiment Construction

[0031] This embodiment provides a cross-database speech emotion recognition method based on a deep domain adaptive convolutional neural network, as shown in Figure 1, comprising the following steps:

[0032] (1) Obtain two speech databases in different languages, which are respectively used as a training database and a test database, wherein each speech database includes several speech signals and corresponding emotion category labels.

[0033] (2) The speech signals in the training database and the test database are preprocessed respectively to obtain the spectrogram of each speech signal. An example speech signal spectrogram is shown in Figure 2.
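Step (2) can be sketched with a plain short-time Fourier transform. The frame length, hop size, and FFT size below (25 ms frames with a 10 ms hop at 16 kHz) are illustrative assumptions, not values taken from the patent:

```python
import numpy as np

def spectrogram(signal, frame_len=400, hop=160, n_fft=512):
    """Log-magnitude spectrogram via a Hamming-windowed STFT.

    frame_len/hop correspond to 25 ms / 10 ms at 16 kHz (assumed values).
    """
    window = np.hamming(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # One-sided FFT: n_fft // 2 + 1 frequency bins per frame
    spec = np.abs(np.fft.rfft(frames, n=n_fft, axis=1))
    return np.log1p(spec)  # log compression, common for CNN inputs

sig = np.random.randn(16000)  # 1 s of dummy audio at 16 kHz
S = spectrogram(sig)
print(S.shape)                # (98, 257): frames x frequency bins
```

In practice the resulting matrix would be resized or cropped to a fixed shape before being fed to the network.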

[0034] (3) Establish a deep domain adaptive convolutional neural network, which comprises, connected in sequence, a first convolutional layer, a first pooling layer, a second convolutional layer, a second pooling layer, a first fully connected layer, a second fully connected layer, and a softmax layer, specifical...
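The text truncates before giving layer parameters, so the kernel and pooling sizes below (5×5 convolutions, 2×2 max pooling with stride 2, a 128×128 input) are purely hypothetical. The sketch only shows how the standard output-size formula traces a spectrogram through the stated layer order:

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Standard convolution / pooling output-size formula."""
    return (size + 2 * pad - kernel) // stride + 1

# Trace an assumed 128x128 spectrogram through the stated layer order
h = w = 128
h, w = conv_out(h, 5), conv_out(w, 5)            # conv1 5x5   -> 124x124
h, w = conv_out(h, 2, 2), conv_out(w, 2, 2)      # pool1 2x2/2 -> 62x62
h, w = conv_out(h, 5), conv_out(w, 5)            # conv2 5x5   -> 58x58
h, w = conv_out(h, 2, 2), conv_out(w, 2, 2)      # pool2 2x2/2 -> 29x29
print(h, w)  # 29 29: spatial size flattened into the first FC layer
```

With these assumed sizes, the first fully connected layer would receive 29 × 29 × (channel count) features.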



Abstract

The invention discloses a cross-database speech emotion recognition method based on a deep domain adaptive convolutional neural network, comprising: (1) acquiring a training database and a test database in different languages; (2) preprocessing the speech signals in the training and test databases respectively to obtain the spectrogram of each speech signal; (3) establishing a convolutional neural network; (4) inputting the spectrograms of the training and test databases into the convolutional neural network for training: during training, first compute the maximum mean discrepancy between the fully connected layer outputs corresponding to the training-database and test-database spectrograms, then compute the cross entropy between the softmax layer output for the training database and its emotion category labels, and finally take the sum of the maximum mean discrepancy and the cross entropy as the network loss, updating the network parameters with the backpropagation algorithm to complete training; (5) obtaining the spectrogram of the speech signal to be recognized, inputting it into the trained deep convolutional neural network, and outputting its emotion category. The method achieves higher recognition accuracy.
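The training loss described above (maximum mean discrepancy between fully connected outputs, plus softmax cross-entropy on the source labels) can be sketched in NumPy. A linear-kernel MMD is assumed here for simplicity, since the abstract does not name the kernel:

```python
import numpy as np

def mmd_linear(Xs, Xt):
    """Linear-kernel maximum mean discrepancy between two feature batches:
    squared distance between the batch means."""
    d = Xs.mean(axis=0) - Xt.mean(axis=0)
    return float(d @ d)

def cross_entropy(logits, labels):
    """Softmax cross-entropy averaged over the batch (numerically stable)."""
    z = logits - logits.max(axis=1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-log_p[np.arange(len(labels)), labels].mean())

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, (8, 16))   # source FC-layer outputs (dummy)
tgt = rng.normal(0.5, 1.0, (8, 16))   # target outputs, shifted distribution
logits = rng.normal(size=(8, 4))      # softmax-layer inputs for source batch
labels = rng.integers(0, 4, 8)        # source emotion labels
loss = cross_entropy(logits, labels) + mmd_linear(src, tgt)
print(loss)
```

In the patent's scheme this combined loss is minimized by backpropagation, so the network simultaneously fits the source labels and pulls the two databases' feature distributions together.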

Description

Technical field

[0001] The invention relates to speech emotion recognition, in particular to a cross-database speech emotion recognition method based on a deep domain adaptive convolutional neural network.

Background technique

[0002] Speech emotion recognition is a research hotspot in the fields of pattern recognition and artificial intelligence, and has broad application prospects. Traditional speech emotion recognition is usually trained and tested on a single speech database, but in real life the speech data of the training set and the test set often differ greatly, for example by coming from different languages. Performing emotion recognition across different speech databases is therefore closer to real-life scenarios; this is the cross-database speech emotion recognition problem. Its difficulty lies in extracting appropriate speech features and reducing the difference in feature distribution between the source database and the target databa...


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G10L25/63, G10L25/30, G10L25/18
CPC: G10L25/18, G10L25/30, G10L25/63
Inventors: 郑文明, 刘佳腾, 宗源, 路成
Owner: SOUTHEAST UNIV