
A Robust Speech Feature Extraction Method Based on Sparse Decomposition and Reconstruction

A speech feature extraction technology based on sparse decomposition, applied in speech analysis, speech recognition, instruments, etc.; it addresses the problem that existing methods ignore the prior probabilities of atoms and the probabilities of mutual conversion between atoms.

Active Publication Date: 2011-12-21
哈尔滨工业大学高新技术开发总公司

AI Technical Summary

Problems solved by technology

[0010] 3. Signal reconstruction: most current methods treat the contributions of atoms as equally probable, ignoring both the prior probability of each atom and the probability of mutual conversion between atoms.



Examples


Specific Embodiment 1

[0027] Specific Embodiment 1: this embodiment is described with reference to figure 1 and comprises the following concrete steps:

[0028] Step 1. Preprocessing: divide the read-in speech into frames and apply a window to each frame, so that the speech is converted from a time sequence into a frame sequence;
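Step 1 can be sketched as follows; the 25 ms frame length and 10 ms hop at a 16 kHz sampling rate are illustrative choices, not values taken from the patent.

```python
import numpy as np

def frame_signal(x, frame_len=400, hop=160):
    """Split a 1-D speech signal into overlapping frames and apply a
    Hamming window to each frame. frame_len/hop correspond to 25 ms /
    10 ms at 16 kHz (assumed parameters)."""
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    window = np.hamming(frame_len)
    frames = np.empty((n_frames, frame_len))
    for i in range(n_frames):
        frames[i] = x[i * hop : i * hop + frame_len] * window
    return frames

# Example: one second of a 440 Hz tone at 16 kHz
x = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
frames = frame_signal(x)
print(frames.shape)  # (98, 400)
```

Each row of `frames` is one windowed frame, ready for the per-frame DFT of step 2.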

[0029] Step 2. Perform the discrete Fourier transform and calculate the power spectrum: $X_a(k) = \left| \sum_{n=0}^{N-1} x(n)\, e^{-j 2\pi k n / N} \right|^2$ …
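A minimal sketch of step 2, computing the per-frame power spectrum |DFT|² with the FFT; the FFT size of 512 is an assumed parameter, not stated in the excerpt.

```python
import numpy as np

def power_spectrum(frames, n_fft=512):
    """Per-frame power spectrum: X_a(k) = |sum_n x(n) e^{-j2*pi*k*n/N}|^2.
    Uses the real FFT, so only the n_fft//2 + 1 non-redundant bins are kept."""
    spec = np.fft.rfft(frames, n=n_fft, axis=-1)
    return np.abs(spec) ** 2

# Toy windowed frames: three identical Hamming-windowed rows
frames = np.hamming(400) * np.ones((3, 400))
P = power_spectrum(frames)
print(P.shape)  # (3, 257)
```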

Specific Embodiment 2

[0039] Specific Embodiment 2: the specific process of step 1 in Embodiment 1 is as follows:

[0040] The input of the present invention is a discrete-time speech signal, which must first be preprocessed by framing and windowing. The purpose of framing is to divide the time signal into overlapping speech segments, i.e., frames; next, a window is applied to each speech frame. The window functions widely used at present include the Hamming window and the Hanning window; the present invention adopts the Hamming window:

[0041] $w(n) = 0.54 - 0.46 \cos\!\left(\frac{2\pi n}{L-1}\right), \quad 0 \le n \le L-1$

[0042] where n is the time index and L is the window length. The other steps are the same as in Embodiment 1.
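The Hamming window adopted here, w(n) = 0.54 − 0.46 cos(2πn/(L−1)), can be checked numerically; the window length of 400 samples is an illustrative value.

```python
import numpy as np

L = 400  # window length (illustrative)
n = np.arange(L)
w = 0.54 - 0.46 * np.cos(2 * np.pi * n / (L - 1))  # Hamming window

# Matches NumPy's built-in implementation
assert np.allclose(w, np.hamming(L))
print(round(w[0], 2))  # 0.08 at the window edges; the value rises to ~1.0 at the center
```

The nonzero edge value (0.08) is what distinguishes the Hamming window from the Hanning window, which tapers to exactly zero at both ends.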

Specific Embodiment 3

[0043] Specific Embodiment 3: the specific process of step 3 in Embodiment 1 is: select representative frames from the training speech frames as atoms, under the condition that the reconstruction error of the training samples is minimized; for the noise atoms, consider dynamic updating to track the influence of time-varying noise. Algorithm I is proposed:

[0044] Algorithm I

[0045]

[0046] where Φ is the atom dictionary and d(f_t, Φ) = min{d_i | d_i = ||f_t − φ_i||_2}, in which φ_i is the i-th atom in the current Φ and ||·||_2 is the 2-norm operator. The algorithm first empties the atom dictionary and defines d(f_t, Φ) = 0 when Φ is the empty set; then, starting from the first speech frame, atoms are added one by one according to the minimum-distance criterion: among the remaining speech frames, those that are very similar to atoms already in the dictionary are discarded, while the others are added to the atom dictionary. This algorithm can ensure that the sig...
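The greedy atom-selection procedure described above can be sketched as follows. The similarity threshold is an assumption: the excerpt only states that frames "very similar" to existing atoms are discarded, without giving a numeric criterion.

```python
import numpy as np

def build_atom_dictionary(frames, threshold=1.0):
    """Greedy atom selection in the spirit of Algorithm I: scan frames in
    order, keeping a frame as a new atom only if its minimum 2-norm
    distance d(f_t, Phi) to the current atoms exceeds `threshold`
    (an assumed parameter)."""
    atoms = []
    for f in frames:
        if not atoms:
            atoms.append(f)  # empty dictionary: keep the first frame
            continue
        d = min(np.linalg.norm(f - a) for a in atoms)  # d(f_t, Phi)
        if d > threshold:
            atoms.append(f)
        # else: frame is a near-duplicate of an existing atom, discard it
    return np.array(atoms)

# Four toy frames forming two clusters -> two atoms survive
frames = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.05, 5.0]])
Phi = build_atom_dictionary(frames, threshold=1.0)
print(len(Phi))  # 2
```

Because frames are processed in temporal order, the dictionary can also be updated online for noise atoms, which matches the "dynamic updating" goal stated in [0043].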



Abstract

The invention discloses a robust speech feature extraction method based on sparse decomposition and reconstruction. The method addresses three problems: 1. selecting an atom dictionary has high time complexity and makes it difficult to satisfy sparsity after signal projection; 2. sparse decomposition of signals gives little consideration to the temporal correlation of speech and noise signals; and 3. signal reconstruction ignores the prior probability of atoms and the mutual conversion between atoms. The method comprises the following steps: step 1, preprocessing; step 2, performing the discrete Fourier transform and computing the power spectrum; step 3, training and storing the atom dictionary; step 4, performing sparse decomposition; step 5, reconstructing the speech spectrum; step 6, applying a Mel triangular filterbank and taking the logarithm; and step 7, splicing the sparse Mel cepstral coefficients with the Mel cepstrum to form the robust feature. The method is used in the field of multimedia information processing.
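Step 6 of the pipeline (Mel triangular filtering followed by a logarithm) can be sketched as below; the number of filters (26), FFT size (512), and sampling rate (16 kHz) are assumed parameters, not values from the patent.

```python
import numpy as np

def mel_filterbank(n_filters=26, n_fft=512, sr=16000):
    """Triangular Mel filterbank matrix of shape (n_filters, n_fft//2 + 1).
    Filter center frequencies are equally spaced on the Mel scale."""
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = inv_mel(np.linspace(mel(0.0), mel(sr / 2.0), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):                    # rising edge of triangle
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):                    # falling edge of triangle
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

# Apply to one frame's power spectrum, then take the logarithm (step 6)
power = np.abs(np.fft.rfft(np.random.randn(400), 512)) ** 2
log_mel = np.log(mel_filterbank() @ power + 1e-10)  # small offset avoids log(0)
print(log_mel.shape)  # (26,)
```

Taking the discrete cosine transform of `log_mel` would then yield the Mel cepstral coefficients used in step 7.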

Description

Technical field

[0001] The invention relates to a speech feature extraction method based on sparse decomposition and reconstruction.

Background technique

[0002] It has always been a human dream to let machines perceive and understand speech as humans do, and speech recognition brings hope to this dream. After decades of development, speech recognition technology has made great achievements: from the initial isolated-word recognition to today's large-vocabulary continuous speech recognition (Large Vocabulary Continuous Speech Recognition, LVCSR), the technology has stepped out of the laboratory and gradually moved toward application. In an ideal environment, the recognition rate of current small- and medium-vocabulary recognition systems can reach more than 99%, and the recognition rate of LVCSR systems can also exceed 95%; in the presence of noise, however, the recognition rate drops sharply. For decades, researchers have tried various methods to e...

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G10L15/02
Inventors: 韩纪庆, 何勇军
Owner: 哈尔滨工业大学高新技术开发总公司