Method for carrying out blind source separation on convolutionary aliasing voice signals

A blind source separation technology for speech signals, applied in speech analysis, speech recognition, instruments, etc., which addresses problems such as the uncertainty of BSS

Status: Inactive | Publication Date: 2010-03-10
SHANDONG UNIV

AI Technical Summary

Problems solved by technology

[0014] To address the uncertainty problem of existing speech-signal BSS, the present invention provides a method for blind source separation of convolutively mixed speech signals.




Detailed Description of the Embodiments

[0083] Figure 1 shows the system block diagram of the present invention for BSS of convolutively mixed speech signals: K sound sources are convolutively mixed and picked up by P sensors. The basic flow of the BSS algorithm is as follows: first, transform the sensor signals to the frequency domain by the STFT; then separate each frequency bin by ICA; rearrange the ICA outputs with the MSBR algorithm to resolve the order uncertainty; then adjust the amplitudes; next, transform the frequency-domain separation matrix W(f) to the time domain by the IDFT to obtain the time-domain separation matrix W(t); and finally convolve W(t) with the sensor signals to obtain an estimate of the original sources.
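As a rough illustration of this processing chain, the following Python/NumPy sketch runs the same sequence of steps: STFT, an independent per-bin demixing stage, a placeholder for the rearrangement and amplitude adjustment, IDFT of W(f) into time-domain filters W(t), and convolution with the sensor signals. The functions `complex_ica_bin` and `fix_permutation_and_scale` are placeholders introduced here; they are not the patent's actual ICA or MSBR algorithms, and the sketch assumes as many sources as sensors.

```python
# Hedged sketch of the pipeline in [0083]: STFT -> per-bin demixing ->
# (placeholder) rearrangement/scaling -> IDFT of W(f) into W(t) -> convolution.
import numpy as np
from scipy.signal import stft, fftconvolve

def complex_ica_bin(Xk):
    """Stand-in per-bin 'demixing' (whitening only), for illustration.
    Xk: (P, T) complex STFT frames of one frequency bin.
    A real implementation would run a complex-valued ICA here."""
    R = Xk @ Xk.conj().T / Xk.shape[1]                 # spatial covariance
    d, E = np.linalg.eigh(R)
    return np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12))) @ E.conj().T

def fix_permutation_and_scale(W):
    """Placeholder for the MSBR rearrangement and amplitude adjustment."""
    return W

def separate(x, fs=16000, nfft=1024):
    """x: (P, N) convolutively mixed sensor signals -> (P, N) source estimates.
    Assumes a square case (number of sources equals number of sensors)."""
    P, N = x.shape
    _, _, X = stft(x, fs=fs, nperseg=nfft)             # X: (P, F, T), F = nfft//2 + 1
    F = X.shape[1]
    W = np.stack([complex_ica_bin(X[:, k, :]) for k in range(F)])   # (F, P, P)
    W = fix_permutation_and_scale(W)
    # Build time-domain separation filters W(t) via a conjugate-symmetric IDFT
    # (DC and Nyquist bins are assumed real, so the impulse responses are real).
    full = np.concatenate([W, W[-2:0:-1].conj()], axis=0)           # (nfft, P, P)
    w_t = np.real(np.fft.ifft(full, axis=0))                        # (nfft, P, P)
    # Convolve the sensor signals with W(t) to estimate the sources.
    y = np.zeros((P, N + nfft - 1))
    for i in range(P):
        for j in range(P):
            y[i] += fftconvolve(x[j], w_t[:, i, j])
    return y[:, :N]
```

In the sketch, the per-bin whitening merely stands in for the complex ICA stage; substituting a proper complex-valued ICA for `complex_ica_bin` and the MSBR rearrangement for `fix_permutation_and_scale` would reproduce the flow described in [0083].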

[0084] A simulation experiment verifies the method of the present invention in the following respects: the performance of the ICA algorithm, the impulse response of the global filter, and the quality of the recovered speech. The mixing filter has 300 tap coefficients (such as ...
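The concrete mixing filters are truncated above. Purely as an assumption for illustration, the sketch below generates a square K-source, P-sensor convolutive mixture with random, exponentially decaying 300-tap filters; the decay constant and the square mixing are choices made here, not taken from the patent.

```python
# Hedged sketch: build a convolutive mixture with 300-tap mixing filters.
import numpy as np
from scipy.signal import fftconvolve

def convolutive_mixture(sources, taps=300, seed=0):
    """sources: (K, N) clean speech signals -> (P, N) mixed sensor signals.
    Assumes P = K (square mixing) for simplicity."""
    rng = np.random.default_rng(seed)
    K, N = sources.shape
    P = K
    decay = np.exp(-np.arange(taps) / 60.0)            # crude room-like decay (assumed)
    x = np.zeros((P, N))
    for p in range(P):
        for k in range(K):
            h_pk = rng.standard_normal(taps) * decay   # random 300-tap mixing filter
            x[p] += fftconvolve(sources[k], h_pk)[:N]
    return x
```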


Abstract

The invention provides a method for carrying out blind source separation on convolutively mixed (aliased) speech signals. First, the time-domain convolutive mixing model is converted into a frequency-domain multi-channel linear instantaneous mixing model: the convolutively mixed time-domain signals are transformed into the frequency domain, and relatively independent ICA operations are then carried out on each channel (frequency bin) to obtain independent components. Next, the independent components are rearranged by an MSBR algorithm, which specifically comprises the following steps: first, the signals of different frequency bands are classified; then permutation matrices are obtained step by step according to different objective functions, and the rearrangement steps are mutually complementary. The MSBR algorithm exploits the strong correlation between harmonic frequencies to improve the iteration accuracy, and resolves the residual uncertainty of the remaining frequency bands using the continuity of adjacent frequency bands and the corresponding reference frequencies; its computational complexity is approximately proportional to the number of reference frequency bands. The invention improves convergence efficiency and accuracy, is better suited to real-time processing, achieves good separation of convolutively mixed speech signals, and can also be applied in real acoustic environments.
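The abstract describes the MSBR rearrangement only at a high level. Purely as a hedged illustration of the general idea it relies on (using correlation between frequency bands to fix the per-bin output order), the sketch below aligns each frequency bin against its neighbour by comparing amplitude envelopes. It is not the patent's MSBR algorithm, which additionally exploits harmonic correlation and reference frequency bands.

```python
# Hedged illustration: fix per-bin output order by envelope correlation
# between adjacent frequency bins (brute-force over permutations; fine for small P).
import numpy as np
from itertools import permutations

def align_by_envelope_correlation(Y):
    """Y: (F, P, T) per-bin ICA outputs -> Y with per-bin permutations fixed."""
    F, P, T = Y.shape
    env = np.abs(Y)                                   # amplitude envelopes
    Y = Y.copy()
    for k in range(1, F):
        best, best_score = None, -np.inf
        for perm in permutations(range(P)):
            # correlation of this bin's (permuted) envelopes with the previous bin
            score = sum(np.corrcoef(env[k, perm[p]], env[k - 1, p])[0, 1]
                        for p in range(P))
            if score > best_score:
                best, best_score = perm, score
        Y[k] = Y[k][list(best)]
        env[k] = env[k][list(best)]
    return Y
```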

Description

technical field

[0001] The invention relates to a method for blind source separation of convolutively mixed speech signals in a multiple-input multiple-output (MIMO) system without channel state parameters, which can be widely used in neural networks and multi-antenna systems, and especially in speech signal processing.

Background technique

[0002] Blind source separation (BSS) of speech signals is a recent research hotspot. A real speech environment can be approximated by a convolutive mixing model, which places higher demands on BSS of convolutively mixed speech signals.

[0003] Traditional BSS algorithms for convolutively mixed speech signals can generally be divided into two categories:

[0004] 1. Deconvolution performed directly in the time domain;

[0005] 2. Transformation to another domain, such as the wavelet or frequency domain, for processing.

[0006] Since there may be many filter coefficients, the first type of algorithm...

Claims


Application Information

IPC(8): G10L15/20
Inventor: 刘琚, 刘清菊, 杜军, 董治强
Owner: SHANDONG UNIV