
Voice separating method based on auditory center system under multi-sound-source environment

An auditory-center and speech-separation technology, applied in speech analysis, instruments, and similar fields, which addresses problems such as processing stages that are too complex to implement readily on a computer.

Status: Inactive | Publication Date: 2014-07-02
CHONGQING UNIV OF POSTS & TELECOMM
Cites 5 | Cited by 14

AI Technical Summary

Problems solved by technology

This method is based mainly on the calculation of harmonics. The auditory scene analysis system can obtain good speech separation results; however, its feature-extraction and cue-organization stages are very complicated and difficult to realize in computer processing.



Examples


Embodiment Construction

[0030] A non-limiting embodiment is given below in conjunction with the accompanying drawings to further illustrate the present invention.

[0031] Figure 2 is a structural diagram of the principle of speech separation based on the auditory central system in a multi-sound-source environment, as provided by the present invention. Multiple speech signals first pass through the peripheral auditory model, which divides them into different frequency channels according to frequency; speech information is then extracted by the superior olivary complex model; finally, the inferior colliculus cell model separates the multiple sound sources into individual speech signals.
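The stage flow described in [0031] can be sketched in code. The following is a minimal illustration only, not the patent's actual peripheral, superior-olivary, or inferior-colliculus models: the peripheral stage is approximated by a bank of Butterworth band-pass filters, the superior olivary complex stage by a per-channel interaural cross-correlation that estimates the ITD, and the inferior colliculus stage by keeping the channels whose ITD matches a target direction. All function names and parameters are hypothetical.

```python
# Minimal sketch of the three-stage flow (illustrative stand-ins, not the patent's models).
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 16000  # assumed sampling rate (Hz)

def peripheral_filterbank(x, centers, fs=FS):
    """Split a signal into frequency channels (stand-in for the peripheral auditory model)."""
    channels = []
    for fc in centers:
        lo, hi = fc / 1.3, min(fc * 1.3, 0.99 * fs / 2)
        sos = butter(2, [lo, hi], btype="bandpass", fs=fs, output="sos")
        channels.append(sosfiltfilt(sos, x))
    return np.array(channels)  # shape: (n_channels, n_samples)

def channel_itd(left_ch, right_ch, fs=FS, max_lag_s=1e-3):
    """Estimate one channel's ITD by cross-correlation (stand-in for the SOC stage)."""
    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    corr = [np.dot(left_ch[max_lag:-max_lag],
                   np.roll(right_ch, lag)[max_lag:-max_lag]) for lag in lags]
    return lags[int(np.argmax(corr))] / fs  # seconds

def separate(left, right, centers, target_itd, tol=1e-4, fs=FS):
    """Keep channels whose ITD is close to the target (stand-in for the IC grouping stage)."""
    L = peripheral_filterbank(left, centers, fs)
    R = peripheral_filterbank(right, centers, fs)
    mask = np.array([abs(channel_itd(l, r, fs) - target_itd) < tol for l, r in zip(L, R)])
    return (L * mask[:, None]).sum(axis=0)  # resynthesize from the selected channels
```

For example, one might call separate(left, right, centers=np.geomspace(150, 4000, 32), target_itd=0.0) to extract a talker directly in front of the listener (zero ITD); the channel spacing and tolerance are arbitrary choices in this sketch.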

[0032] Acoustic studies have shown that the external auditory canal of each ear has a frequency-dependent response, so signals of different frequencies are emphasized differently. The basilar membrane, located inside the cochlea, is an important link in the processing performed by the auditory central system.

[0033] The basilar membrane has the functio...
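The paragraph above is cut off, but the band-pass, place-coded behavior of the basilar membrane that it begins to describe is commonly modeled with a gammatone filterbank. The sketch below shows one such model as an assumption, not necessarily the filterbank used in the patent; the ERB formula follows Glasberg and Moore, and all names are illustrative.

```python
# Gammatone filterbank sketch: a common computational model of basilar-membrane filtering.
import numpy as np
from scipy.signal import fftconvolve

def erb(fc):
    """Equivalent rectangular bandwidth (Hz) of an auditory filter centred at fc."""
    return 24.7 + 0.108 * fc

def gammatone_ir(fc, fs, duration=0.05, order=4, b=1.019):
    """Impulse response g(t) = t^(n-1) * exp(-2*pi*b*ERB(fc)*t) * cos(2*pi*fc*t)."""
    t = np.arange(int(duration * fs)) / fs
    g = t ** (order - 1) * np.exp(-2 * np.pi * b * erb(fc) * t) * np.cos(2 * np.pi * fc * t)
    return g / np.sqrt(np.sum(g ** 2))  # energy-normalize each filter

def erb_spaced_centres(f_lo, f_hi, n):
    """Centre frequencies spaced uniformly on the ERB-rate scale."""
    e = np.linspace(21.4 * np.log10(1 + 0.00437 * f_lo),
                    21.4 * np.log10(1 + 0.00437 * f_hi), n)
    return (10 ** (e / 21.4) - 1) / 0.00437

def basilar_membrane(x, fs, n_channels=32, f_lo=80.0, f_hi=6000.0):
    """Decompose a signal into frequency channels, one per place along the membrane."""
    return np.array([fftconvolve(x, gammatone_ir(fc, fs), mode="same")
                     for fc in erb_spaced_centres(f_lo, f_hi, n_channels)])
```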



Abstract

The invention provides a voice separating method based on an auditory center system under a multi-sound-source environment, and relates to the field of digital signal processing. The method addresses the limitation that most voice recognition methods can only be used in low-noise, single-sound-source environments; to carry out voice recognition in a noisy multi-sound-source environment, voice separation must be achieved first. According to the method, multi-spectral analysis is carried out on the voice signals by a peripheral auditory model; a coincidence-detection nerve cell, comprising a generalized synapse model and a generalized cell model, integrates the ITD (interaural time difference) and ILD (interaural level difference) information; and voice separation is achieved in an inferior colliculus cell model. Experiments show that the method has good robustness.
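The abstract's coincidence-detection nerve cell, built from a generalized synapse model and a generalized cell model, is not spelled out on this page. As a hedged illustration of the underlying idea only, the Jeffress-style sketch below uses an array of simple coincidence detectors fed by the two ears through different internal delays; the detector that responds most strongly marks the ITD, and the energy ratio of the same channel pair gives the ILD. The synapse and cell models here are deliberately simplistic placeholders, not the patent's models.

```python
# Jeffress-style coincidence sketch for ITD, plus an energy-ratio ILD (illustrative only).
import numpy as np

def synapse(x):
    """Placeholder synapse model: half-wave rectification of the channel signal."""
    return np.maximum(x, 0.0)

def coincidence_cell(left_drive, right_drive):
    """Placeholder cell model: output grows when both inputs are active at the same time."""
    return float(np.sum(left_drive * right_drive))

def estimate_itd(left_ch, right_ch, fs, max_lag_s=1e-3):
    """ITD = the internal delay whose coincidence cell responds most strongly."""
    l, r = synapse(left_ch), synapse(right_ch)
    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    responses = [coincidence_cell(l[max_lag:-max_lag],
                                  np.roll(r, lag)[max_lag:-max_lag]) for lag in lags]
    return lags[int(np.argmax(responses))] / fs  # seconds

def estimate_ild(left_ch, right_ch, eps=1e-12):
    """ILD in dB: energy ratio between the left and right channel signals."""
    return 10.0 * np.log10((np.sum(left_ch ** 2) + eps) / (np.sum(right_ch ** 2) + eps))
```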

Description

Technical field
[0001] The invention belongs to the field of artificial intelligence, and in particular relates to a speech separation method based on the auditory central system in a multi-sound-source environment.
Background technique
[0002] At present, there are roughly three technologies for speech separation in a multi-sound-source environment: computational auditory scene analysis, independent component analysis, and speech separation based on the auditory central system.
[0003] The implementation of independent component analysis requires reasonable assumptions about the mixing mode and statistical characteristics of the speech signals: in the time domain the mixing must be linear and instantaneous, the source speech signals must be statistically independent of one another with at most one of them Gaussian, and the number of mixed speech signals must be no fewer than the number of initial speech signals. Obviously, it is difficul...
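To make the assumptions listed in [0003] concrete, the hedged sketch below mixes two independent, non-Gaussian sources with an instantaneous 2x2 matrix and recovers them with FastICA from scikit-learn. When any assumption is broken, for example convolutive mixing in a real room or fewer microphones than talkers, this recovery degrades, which is the limitation the background section points to. All signals and parameters here are toy examples.

```python
# Instantaneous-mixing ICA demo: the setting in which the [0003] assumptions hold.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 8000)

# Two statistically independent, non-Gaussian sources (at most one may be Gaussian).
s1 = np.sign(np.sin(2 * np.pi * 5 * t))   # square wave
s2 = rng.laplace(size=t.size)             # Laplacian noise
S = np.c_[s1, s2]

# Linear, instantaneous mixing; as many observations as sources.
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
X = S @ A.T                               # shape: (n_samples, n_mixtures)

# Recover the sources (up to permutation and scaling).
ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)

# Correlate estimates with the true sources to check recovery.
corr = np.corrcoef(np.c_[S, S_hat].T)[:2, 2:]
print(np.round(np.abs(corr), 2))
```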


Application Information

Patent Type & Authority: Applications (China)
IPC(8): G10L21/0272
Inventors: 罗元, 张毅, 胡章芳, 童开国, 徐晓东
Owner: CHONGQING UNIV OF POSTS & TELECOMM