
Feature extraction method and device as well as stress detection method and device

A feature extraction and detection technology applied in speech analysis, speech recognition, instruments, and related fields; it addresses the low accuracy of existing accent detection and achieves more accurate features and higher precision.

Active Publication Date: 2015-04-29
TSINGHUA UNIV +1

AI Technical Summary

Problems solved by technology

[0004] Because the prior art uses prosodic features of the speech data as the detection parameters for classification and detection, and the syllable-level prosodic feature extraction process is affected by environmental factors such as noise, the resulting prosodic features are inaccurate and the accuracy of the accent detection method is not high.



Examples


Embodiment 1

[0064] As shown in Figure 1, an embodiment of the present invention provides a feature extraction method that can be used for accent detection. The method includes:

[0065] Step 100: output the first frame-level feature vector of the acoustic feature pronunciation attributes through the first classifier, according to the preset correspondence between phonemes and the acoustic feature pronunciation attributes.

[0066] Step 200: output the second frame-level feature vector of the vowel and consonant pronunciation attributes through the second classifier, according to the preset correspondence between phonemes and the vowel and consonant pronunciation attributes.

[0067] Step 300: map the first frame-level feature vector of the acoustic feature pronunciation attributes or the second frame-level feature vector of the vowel and consonant pronunciation attributes to the corresponding syllable-level pronunciation feature vector.

[0068] In this ...
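A minimal sketch of the three steps above, assuming the two classifiers are callables that return per-frame posterior vectors and that the frame-to-syllable mapping averages the frame vectors within each syllable (the averaging is an assumption; the patent does not fix the mapping here):

import numpy as np

def extract_syllable_features(frames, syllable_bounds,
                              acoustic_classifier, vowel_consonant_classifier):
    # Step 100: first frame-level feature vectors (acoustic feature pronunciation attributes).
    first_vectors = np.vstack([acoustic_classifier(f) for f in frames])
    # Step 200: second frame-level feature vectors (vowel/consonant pronunciation attributes).
    second_vectors = np.vstack([vowel_consonant_classifier(f) for f in frames])
    # Step 300: map frame-level vectors to syllable-level pronunciation feature vectors.
    # Pooling by averaging within each syllable is a placeholder choice.
    syllable_vectors = []
    for start, end in syllable_bounds:   # frame-index range of each syllable
        syllable_vectors.append(np.concatenate([
            first_vectors[start:end].mean(axis=0),
            second_vectors[start:end].mean(axis=0),
        ]))
    return np.vstack(syllable_vectors)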

Embodiment 2

[0103] As shown in Figure 4, an embodiment of the present invention provides a stress detection method. The method includes:

[0104] Step 401: receive the speech data to be detected.

[0105] Step 402: obtain the speech recognition result of the speech data to be detected through speech recognition technology.

[0106] Step 403: divide the speech data to be detected into syllables according to the speech recognition result.

[0107] Step 404: acquire the syllable-level pronunciation feature vectors of the syllable-divided speech data through an accent feature extraction method.

[0108] In this embodiment, the accent feature extraction method in step 404 may be the extraction method provided in Embodiment 1. As shown in Figure 5, step 404 may also include:

[0109] Step 501: acquire the prosodic features of the speech data to be detected.

[0110] In this embodiment, various methods in the prior art may be used to extract the corresponding prosodic feat...
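One way to read Embodiment 2 as an end-to-end pipeline is sketched below; every callable (recognizer, syllable_segmenter, pronunciation_extractor, prosody_extractor, stress_classifier) is a hypothetical stand-in for a component the text leaves abstract:

def detect_stress(speech, recognizer, syllable_segmenter,
                  pronunciation_extractor, prosody_extractor, stress_classifier):
    # Step 401: speech is the received speech data to be detected.
    # Step 402: obtain the speech recognition result.
    recognition_result = recognizer(speech)
    # Step 403: divide the speech into syllables according to the recognition result.
    syllable_bounds = syllable_segmenter(speech, recognition_result)
    # Step 404: syllable-level pronunciation feature vectors (e.g. via Embodiment 1).
    pronunciation_features = pronunciation_extractor(speech, syllable_bounds)
    # Step 501: prosodic features of the speech data to be detected.
    prosodic_features = prosody_extractor(speech, syllable_bounds)
    # Classify each syllable as stressed or unstressed.
    return [stress_classifier(p, q)
            for p, q in zip(pronunciation_features, prosodic_features)]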

Embodiment 3

[0115] As shown in Figure 6, an embodiment of the present invention provides a feature extraction device that can be used for accent detection. The device includes:

[0116] The acoustic feature extraction module 901 is configured to output the first frame-level feature vector of the acoustic feature pronunciation attributes through the first neural network, according to the preset correspondence between phonemes and the acoustic feature pronunciation attributes.

[0117] The vowel and consonant pronunciation feature extraction module 902 is configured to take the first frame-level feature vector of the acoustic feature pronunciation attributes extracted by the acoustic feature extraction module 901 and, through the second neural network and according to the preset correspondence between phonemes and the vowel and consonant pronunciation attributes, output the second frame-level feature vector of the vowel and consonant pronunciation attributes;

[0118] The mapping module 903 is configured to map the second frame-lev...
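A structural sketch of how the three modules might fit together in code follows; the class name, constructor arguments, and the decision to feed module 902 the output of module 901 are assumptions based on paragraph [0117], not a definitive reading of the device:

class FeatureExtractionDevice:
    def __init__(self, first_network, second_network, syllable_mapper):
        # Module 901: acoustic feature pronunciation-attribute extraction (first neural network).
        self.acoustic_module = first_network
        # Module 902: vowel/consonant pronunciation-attribute extraction (second neural network).
        self.vowel_consonant_module = second_network
        # Module 903: maps frame-level feature vectors to syllable-level feature vectors.
        self.mapping_module = syllable_mapper

    def extract(self, frames, syllable_bounds):
        first = self.acoustic_module(frames)          # first frame-level feature vectors
        second = self.vowel_consonant_module(first)   # second frame-level feature vectors
        return self.mapping_module(second, syllable_bounds)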


Abstract

The invention discloses a feature extraction method and device as well as a stress detection method and device, relates to speech detection technology, and aims to solve the problem of low stress detection accuracy in the prior art. The technical scheme is that the feature extraction method comprises the following steps: according to a preset correspondence between phonemes and acoustic feature pronunciation attributes, a first frame-level feature vector of the acoustic feature pronunciation attributes is output through a first classifier; a second frame-level feature vector of vowel and consonant pronunciation attributes is output through a second classifier according to a preset correspondence between phonemes and the vowel and consonant pronunciation attributes; and the first frame-level feature vector of the acoustic feature pronunciation attributes or the second frame-level feature vector of the vowel and consonant pronunciation attributes is mapped to a syllable-level pronunciation feature vector. The scheme can be applied to the speech detection process.

Description

Technical field

[0001] The invention relates to speech detection technology, and in particular to a feature extraction method and device for stress detection, and a stress detection method and device.

Background technique

[0002] In English language learning, the pronunciation accuracy of each syllable directly affects how standard the speaker's English expression is, and mastering accurate English stress pronunciation is an especially important part of this.

[0003] At present, accent detection of English pronunciation first extracts prosodic feature parameters from the speech training data in units of syllables, then classifies and detects the learner's speech data through a classifier according to these prosodic feature parameters, and obtains the relevant stress detection results to determine whether the stress is pronounced accurately. The prosodic features used may include fundamental frequency feature parameters, segment length...
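For illustration only, a rough sketch of the syllable-level prosodic features the background describes (fundamental frequency and segment length); the use of librosa.pyin and the per-syllable pooling are assumptions, not the prior-art method itself:

import librosa
import numpy as np

def prosodic_features(wav_path, syllable_times):
    y, sr = librosa.load(wav_path, sr=16000)
    # Frame-level fundamental frequency via the pYIN tracker (an assumed choice).
    f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                 fmax=librosa.note_to_hz("C7"), sr=sr)
    times = librosa.times_like(f0, sr=sr)
    features = []
    for start, end in syllable_times:            # syllable boundaries in seconds
        mask = (times >= start) & (times < end) & voiced
        f0_syl = f0[mask]
        features.append({
            "mean_f0": float(np.nanmean(f0_syl)) if f0_syl.size else 0.0,  # pitch
            "duration": end - start,                                       # segment length
        })
    return features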


Application Information

Patent Type & Authority: Application (China)
IPC(8): G10L25/03, G10L25/78, G10L15/02, G10L15/08
Inventors: 刘加, 赵军红, 袁桦, 张卫强, 何亮, 赵峰, 邵颖
Owner: TSINGHUA UNIV