Embedded type speech emotion recognition method and device
A speech emotion recognition and embedded technology, applied in speech recognition, speech analysis, instruments, etc., can solve the problem of low recognition rate
Inactive Publication Date: 2012-10-17
SOUTHEAST UNIV
Problems solved by technology
[0004] The problem solved by the present invention is: to overcome the defect of the low recognition rate of traditional speech emotion recognition for non-specific speakers, and at the same time to address the lack of speech emotion recognition devices with good human-computer interaction functions on the market. Combining the above background and needs, the present invention provides an embedded speech emotion recognition method and a device thereof. The system can recognize the speaker's emotions, such as calmness, happiness, anger, and fear, on a small embedded device, and take different actions according to the emotion carried by the speaker's voice.
Method used
Examples
Embodiment 1
[0036] An embedded speech emotion recognition method, comprising the following steps:
[0037] Step 1: receiving the input of the emotional speech segment to be identified;
[0038] Step 2: digitalize the emotional speech segment to be identified to provide a digital speech signal;
[0039] Step 3: Preprocess the digital speech signal X(n) to be recognized, including pre-emphasis, framing, windowing, and endpoint detection;
[0040] Step 3.1: Pre-emphasize the digital speech signal X(n) as follows:
[0041] X̄(n) = X(n) − αX(n−1)    (1)
[0042] In the formula, α...
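The preprocessing steps above can be sketched in Python as follows. This is a minimal illustration, not the patent's implementation: the pre-emphasis coefficient α = 0.97, the frame length of 256 samples, the hop of 128 samples, and the Hamming window are common defaults assumed here, since the excerpt does not specify them.

```python
import numpy as np

def pre_emphasize(x, alpha=0.97):
    # X̄(n) = X(n) - α·X(n-1), per Eq. (1); alpha = 0.97 is a common
    # choice (the patent's exact value is not given in this excerpt).
    x = np.asarray(x, dtype=float)
    y = np.empty_like(x)
    y[0] = x[0]                      # first sample has no predecessor
    y[1:] = x[1:] - alpha * x[:-1]
    return y

def frame_and_window(x, frame_len=256, hop=128):
    # Split the signal into overlapping frames and apply a Hamming
    # window to each frame (framing + windowing of Step 3).
    x = np.asarray(x, dtype=float)
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    win = np.hamming(frame_len)
    return np.stack([x[i * hop : i * hop + frame_len] * win
                     for i in range(n_frames)])
```

Endpoint detection (trimming silence before and after the utterance) would follow these two steps but is omitted here.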
Embodiment 2
[0108] An operating device for the embedded speech emotion recognition method, the device mainly comprising: a central processing unit 101, a power supply 102, a clock generator 103, a Nand type flash memory 104, a Nor type flash memory 105, an audio codec chip 106, a microphone 107, a loudspeaker 108, a keyboard 109, a liquid crystal display 110, and a universal serial bus interface large-capacity storage device 111; characterized in that the Nor type flash memory 105 stores the device's operating system, file system, and boot loading module; the central processing unit 101 uses a 32-bit embedded microprocessor based on the ARM architecture as its core; and the Nand type flash memory 104 stores the software implementation of the speech emotion recognition method, including the speech preprocessing methods, feature extraction methods, emotion model training modules, and Gaussian mixture model emotion recognition models. The above-mentioned universal serial bus interface mass sto...
Abstract
The invention relates to an embedded speech emotion recognition method and device. The method comprises a feature extraction method, an emotion model training method, a Gaussian mixture model, and an emotion recognition method. In the method, the parameters of the speech emotion recognition model are adjusted in a self-adaptive manner according to the recognition result of a speaker module, so that the non-specific-speaker speech emotion recognition problem is transformed into a specific-speaker speech emotion recognition problem. The device comprises a central processor, a power supply, a clock generator, a Nand Flash memory, a Nor Flash memory, an audio coding-decoding chip, a microphone, a loudspeaker, a keyboard, an LCD (Liquid Crystal Display), and a USB (Universal Serial Bus) interface storage device. By adding a speaker recognition model to speech emotion recognition, the method and device solve the problem that the speech emotion recognition rate declines sharply under non-specific-speaker conditions, and the device additionally gains an identity recognition function.
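As a rough illustration of the Gaussian-mixture-model recognition step described in the abstract, the sketch below scores a sequence of feature vectors against one diagonal-covariance GMM per emotion and picks the highest-scoring emotion. All parameter values, the emotion labels used in the test, and the helper names (`gmm_loglik`, `classify`) are hypothetical and not taken from the patent; the patent's actual models would be trained on speech features and adapted per speaker.

```python
import numpy as np

def gmm_loglik(X, weights, means, variances):
    # Total log-likelihood of frames X (shape T x D) under a
    # diagonal-covariance GMM: log p(X|λ) = Σ_t log Σ_k w_k N(x_t; μ_k, σ_k²).
    X = np.atleast_2d(np.asarray(X, dtype=float))   # (T, D)
    diff = X[:, None, :] - means[None, :, :]        # (T, K, D)
    log_norm = -0.5 * np.log(2.0 * np.pi * variances).sum(axis=1)       # (K,)
    log_comp = log_norm[None, :] - 0.5 * (diff**2 / variances[None, :, :]).sum(axis=2)
    # Log-sum-exp over mixture components, done stably with logaddexp.
    log_mix = np.logaddexp.reduce(np.log(weights)[None, :] + log_comp, axis=1)
    return log_mix.sum()

def classify(X, models):
    # models: {emotion: (weights (K,), means (K, D), variances (K, D))}.
    # Return the emotion whose GMM gives the highest log-likelihood.
    return max(models, key=lambda emo: gmm_loglik(X, *models[emo]))
```

In the patent's scheme, the speaker-recognition result would additionally select or adapt the per-speaker emotion models before this scoring step.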
Description
technical field
[0001] The present invention relates to speech emotion recognition technology, and in particular to an embedded speech emotion recognition method and device, belonging to the field of speech emotion recognition technology.
Background technique
[0002] Automatic speech emotion recognition technology is a relatively marginal technology in the IT industry. Speech, as a communication medium between people, carries rich emotional information. Emotion plays an important role in human perception, decision-making, and other processes, and in human communication. With the development of science and technology, human-machine communication is becoming more and more important in people's daily life. Using voice to conduct natural and harmonious human-computer interaction has long been a goal that people strive for. Speech emotion recognition is an important part of harmonious human-computer interaction. I...
Claims