Speech separation method based on fuzzy membership function

A technology combining a fuzzy membership function with speech separation, applied in speech analysis and related instruments, which addresses problems such as low speech quality.

Active Publication Date: 2013-09-25
JILIN UNIV


Problems solved by technology

[0004] The present invention provides a speech separation method based on a fuzzy membership function.




Embodiment Construction

[0099] The invention discloses a speech separation method based on a fuzzy membership function. The method simulates the human auditory system and uses speech pitch features to separate speech, including the following steps:

[0100] (1) Speech preprocessing, as shown in Figure 2: input the speech signal, then perform endpoint detection and pre-emphasis on it, with a pre-emphasis coefficient of 0.95;
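The pre-emphasis in step (1) is a first-order high-pass difference. A minimal sketch in NumPy (the function name is illustrative; only the coefficient 0.95 comes from the text):

```python
import numpy as np

def pre_emphasize(signal: np.ndarray, alpha: float = 0.95) -> np.ndarray:
    """Apply the pre-emphasis filter y[n] = x[n] - alpha * x[n-1].

    The first sample is passed through unchanged, since x[-1] is undefined.
    """
    x = np.asarray(signal, dtype=float)
    out = np.empty_like(x)
    out[0] = x[0]
    out[1:] = x[1:] - alpha * x[:-1]
    return out
```

Pre-emphasis boosts the high-frequency part of the spectrum, which is useful before pitch and filterbank analysis because speech energy falls off with frequency.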

[0101] (2) Auditory feature extraction, as shown in Figure 3, includes the following:

[0102] (1) The preprocessed signal is filtered by a gammatone filterbank that simulates the cochlea.

[0103] 1) The time-domain impulse response of the gammatone filter is

[0104] g_c(t) = t^(i-1) exp(−2π b_c t) cos(2π f_c t + φ_c) U(t), (1 ≤ c ≤ N)

[0105] Here N is the number of filters and c is the filter index, taking values in [1, N] ordered by frequency; b_c and f_c are the bandwidth and center frequency of the c-th filter, and φ_c is its phase; i is the filter order, taken as i = 4; U(t) is the unit step function.
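A sketch of this impulse response in NumPy. The excerpt does not specify the bandwidth b_c or phase φ_c, so this assumes the ERB-scale bandwidth rule commonly used with gammatone filterbanks in the CASA literature (b = 1.019 · ERB(f), ERB(f) = 24.7 + 0.108 f) and zero phase:

```python
import numpy as np

def gammatone_ir(fc: float, fs: float, duration: float = 0.05,
                 order: int = 4) -> np.ndarray:
    """Sampled impulse response g(t) = t^(i-1) exp(-2*pi*b*t) cos(2*pi*fc*t) u(t).

    fc: center frequency in Hz, fs: sampling rate in Hz, order: filter order i.
    Bandwidth b follows the common ERB rule (an assumption, not from the patent).
    """
    t = np.arange(int(duration * fs)) / fs           # t >= 0, so u(t) = 1
    b = 1.019 * (24.7 + 0.108 * fc)                  # bandwidth in Hz (assumed)
    g = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return g / np.max(np.abs(g))                     # peak-normalize
```

A bank of N such filters with center frequencies spaced on the ERB scale approximates the frequency decomposition performed by the cochlea; the preprocessed signal is convolved with each impulse response to obtain the channel outputs.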



Abstract

The invention provides a speech separation method based on a fuzzy membership function, and belongs to the field of speech separation methods. Incorporating a fuzzy membership function into speech separation yields a more accurate definition of the degree to which each speech time-frequency unit belongs to the target signal. An auditory oscillation model is built by simulating the human auditory system, and speech pitch features are extracted. The time-frequency units are labeled according to pitch-period features to form foreground and background streams, and each unit is judged to be target or noise according to its label. In the synthesis stage, target units are multiplied by a high weight and noise units by a low weight to obtain the resynthesized speech. With this method, the pitch period can be estimated more precisely, the time-frequency units can be labeled more accurately on the basis of feature cues, and a more complete target speech can be obtained. Because the method relies on the pitch features of speech, it achieves good separation under complex and non-stationary noise and has a wide application range.
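The synthesis-stage weighting described above (target units multiplied by a high weight, noise units by a low weight) can be sketched as follows. The concrete weight values, the per-frame labeling granularity, and the summation across channels are illustrative assumptions, not specifics from the patent:

```python
import numpy as np

def weighted_resynthesis(units: np.ndarray, labels: np.ndarray,
                         w_target: float = 1.0, w_noise: float = 0.1) -> np.ndarray:
    """Recombine labeled time-frequency units into a single waveform.

    units  : (n_channels, n_samples) filterbank channel outputs
    labels : (n_channels, n_frames)  1 = target unit, 0 = noise unit
    Assumes n_samples is an integer multiple of n_frames.
    """
    n_ch, n_samp = units.shape
    frame = n_samp // labels.shape[1]
    weights = np.where(labels == 1, w_target, w_noise)    # per-unit weight
    gain = np.repeat(weights, frame, axis=1)[:, :n_samp]  # expand frames to samples
    return (units * gain).sum(axis=0)                     # sum across channels
```

This is a soft-mask variant of the binary time-frequency masking used in computational auditory scene analysis: keeping a small nonzero weight on noise units avoids the musical-noise artifacts of a hard zero mask.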

Description

Technical Field

[0001] The invention relates to a speech separation method, and in particular to a computational auditory scene analysis speech separation method based on a membership function.

Background Technique

[0002] Speech separation is used to reduce the interference of noise on the speech signal and to improve the speech quality of the target signal. It is often used at the front end of speech recognition or speaker recognition systems to improve recognition performance. Computational auditory scene analysis simulates human hearing and effectively separates target speech from mixed signals; it is currently the mainstream speech separation approach.

[0003] Chinese patent CN102592607 adopts a blind-separation speech separation method, using subband decomposition and independent component analysis to extract the target speech. It improves on the separation effect of the traditional blind separation method, but its separation effect is poor und...

Claims


Application Information

IPC(8): G10L21/0272
Inventor: 林琳, 徐鹤, 孙晓颖, 陈健, 胡封晔, 魏晓丽
Owner JILIN UNIV