
Multi-channel speech enhancing method based on auditory perception model

A speech-enhancement technology based on an auditory perception model, applicable to speech analysis, hearing aids, and related instruments. It addresses problems such as phase distortion and the inability of prior methods to run in real time, and achieves low computational complexity, reliable signal reconstruction, and freedom from phase distortion.

Inactive Publication Date: 2014-04-09
INST OF ACOUSTICS CHINESE ACAD OF SCI
  • Summary
  • Abstract
  • Description
  • Claims
  • Application Information

AI Technical Summary

Problems solved by technology

[0004] The purpose of the present invention is to address the defects of the prior art by providing a multi-channel speech enhancement method based on an auditory perception model that simulates human auditory frequency resolution with a small number of channels. The invention combines the high efficiency of the weighted splicing-and-summation structure with an all-pass transformation, and overcomes the real-time implementation and phase-distortion problems of current frequency-warped filter bank methods.

Method used



Examples


Embodiment Construction

[0043] The technical solutions of the present invention will be described in further detail below with reference to the accompanying drawings and embodiments.

[0044] The multi-channel speech enhancement method based on an auditory perception model in the embodiment of the present invention can be applied to digital hearing aids. In this embodiment, the weighted splicing-and-adding structure is combined with an all-pass transformation in the signal analysis stage, which is computationally efficient and can be implemented in real time; human-ear frequency resolution is simulated with a small number of channels. An all-pass inverse transformation is added in the signal synthesis stage to solve the problem of phase distortion.
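The all-pass transformation referred to above warps a uniform frequency axis toward a perceptual (Bark-like) scale. As an illustration of the underlying idea only (not the patented filter bank), the classic first-order all-pass substitution z⁻¹ → (z⁻¹ − a)/(1 − a·z⁻¹) maps each normalized frequency ω onto a warped frequency; the coefficient `a` below is an assumed illustrative parameter, with values around 0.5–0.6 commonly cited as approximating the Bark scale at 16 kHz sampling.

```python
import numpy as np

def warped_frequency(omega, a):
    """Warped frequency produced by the first-order all-pass substitution
    z^-1 -> (z^-1 - a) / (1 - a * z^-1).

    omega : normalized frequency in radians, 0 <= omega <= pi
    a     : warping coefficient, |a| < 1 (illustrative, not from the patent)
    """
    # Standard closed form for the phase of the first-order all-pass filter:
    # the warped axis stretches low frequencies when a > 0, which is what
    # lets a small number of channels mimic human-ear resolution.
    return omega + 2.0 * np.arctan2(a * np.sin(omega), 1.0 - a * np.cos(omega))
```

Note that the mapping fixes ω = 0 and ω = π and is monotonic, so a uniformly spaced channel grid on the warped axis corresponds to a non-uniform, perceptually motivated grid on the original axis.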

[0045] Figure 1 is a flowchart of the multi-channel speech enhancement method based on an auditory perception model according to an embodiment of the present invention; as shown in the figure, the method specifically includes the foll...



Abstract

The invention relates to a multi-channel speech enhancement method based on an auditory perception model. The method comprises the steps of: dividing the input signal into multiple channel signals over non-uniform channels; detecting the noise level of each channel to obtain noise level data; computing the channel gain of each channel from the noise level data; taking the product of each channel signal and its channel gain as the gain signal of that channel; synthesizing the gain signals of all channels to obtain the output signal; and transmitting the output signal. The filter bank used in the method, which simulates the auditory perception model, combines a weighted splicing-and-adding structure with an all-pass transformation, so that human-ear frequency resolution is simulated with a small number of channels at low computational complexity. An all-pass inverse transformation is added in the signal synthesis process, which solves the phase-distortion problem of the prior art and allows the method to be used for real-time signal processing.
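The per-channel pipeline described in the abstract (split → detect noise level → compute gain → apply gain → synthesize) can be sketched as follows. This is a minimal FFT-based illustration, not the patented all-pass filter bank; the band edges, the percentile-based noise estimate, and the Wiener-like gain rule are all assumptions made for the example.

```python
import numpy as np

def enhance_multichannel(x, band_edges, fs, oversub=2.0, floor=0.1):
    """Illustrative multi-channel gain-based enhancement (sketch only).

    x          : 1-D input signal
    band_edges : channel edges in Hz, non-uniform, e.g. [0, 250, 500, 1000, ...]
    fs         : sampling rate in Hz
    """
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    out_spec = np.zeros_like(spec)
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        band = (freqs >= lo) & (freqs < hi)          # step 1: channel split
        mag = np.abs(spec[band])
        # step 2: crude per-channel noise-level estimate (low percentile
        # of the band magnitude -- an assumption, not the patent's detector)
        noise_level = np.percentile(mag, 20) if mag.size else 0.0
        # step 3: Wiener-like channel gain from the estimated channel SNR
        power = np.mean(mag ** 2) + 1e-12
        gain = max(floor, 1.0 - oversub * noise_level ** 2 / power)
        # step 4: apply the gain to the channel signal
        out_spec[band] = spec[band] * gain
    # step 5: synthesize the output by summing the gained channels
    return np.fft.irfft(out_spec, n=len(x))
```

A quieter channel (mostly noise) receives a small gain and is attenuated, while a channel dominated by speech energy passes nearly unchanged, which is the essence of per-channel gain enhancement.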

Description

Technical field

[0001] The invention relates to speech digital signal processing technology, and in particular to a multi-channel speech enhancement method based on an auditory perception model.

Background technique

[0002] Speech enhancement is an important branch of speech signal processing; its purpose is to improve sound quality, clarity, and intelligibility, and to reduce auditory fatigue. One of the main methods of speech enhancement is spectral subtraction, which estimates the power spectrum of clean speech by subtracting the noise power spectrum from the noisy speech power spectrum. Traditional spectral subtraction applies a single subtraction parameter across the entire frequency domain after a frame of speech undergoes a fast Fourier transform. However, the non-stationary noise found in real environments is non-uniformly distributed in the frequency domain, so noise signals have different influences on speech signa...
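The traditional spectral subtraction described in the background can be sketched as below. This is a generic textbook formulation, not the invention's method; the frame length, hop size, and spectral floor are illustrative parameters, and the noisy phase is reused directly, which is exactly the phase limitation the background alludes to.

```python
import numpy as np

def spectral_subtraction(noisy, noise_psd, frame_len=256, hop=128, floor=0.01):
    """Basic frame-wise power spectral subtraction (illustrative sketch).

    noisy     : 1-D noisy speech signal
    noise_psd : estimated noise power spectrum, length frame_len // 2 + 1
    """
    window = np.hanning(frame_len)
    out = np.zeros(len(noisy))
    for start in range(0, len(noisy) - frame_len, hop):
        frame = noisy[start:start + frame_len] * window
        spec = np.fft.rfft(frame)
        power = np.abs(spec) ** 2
        # Subtract the noise power uniformly; clamp to a spectral floor so
        # the estimate never goes negative (the cause of "musical noise").
        clean_power = np.maximum(power - noise_psd, floor * power)
        # Reuse the noisy phase -- traditional spectral subtraction has no
        # better phase estimate available.
        clean_spec = np.sqrt(clean_power) * np.exp(1j * np.angle(spec))
        out[start:start + frame_len] += np.fft.irfft(clean_spec) * window
    return out
```

Because the same `noise_psd` is subtracted in every frame, non-stationary and frequency-dependent noise is handled poorly, which motivates the per-channel processing of the invention.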

Claims


Application Information

Patent Type & Authority: Application (China)
IPC(8): G10L21/0208; H04R25/00
Inventors: 孟晓辉 (Meng Xiaohui), 肖灵 (Xiao Ling)
Owner: INST OF ACOUSTICS CHINESE ACAD OF SCI