
A two-channel sound source localization method based on deep learning

A deep-learning-based sound source localization technology, applied in neural learning methods, localization, instruments, etc. It addresses problems such as performance degradation and achieves the effects of improved performance, reduced impact, and improved robustness.

Active Publication Date: 2021-12-21
INST OF ACOUSTICS CHINESE ACAD OF SCI
Cites: 6 · Cited by: 0

AI Technical Summary

Problems solved by technology

In sound source localization methods that output an azimuth, the source azimuth can be estimated by exploiting the orthogonality between the signal subspace and the noise subspace, but the performance of such algorithms degrades significantly in the presence of reverberation.

Method used




Embodiment Construction

[0062] To make the purpose, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below in conjunction with the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the present invention. All other embodiments obtained by persons of ordinary skill in the art from these embodiments without creative effort fall within the protection scope of the present invention.

[0063] Figure 1 is a flowchart of the two-channel sound source localization method based on deep learning. As shown in Figure 1, the method includes:

[0064] Step S101: Framing, windowing, and Fourier transform are performed on the microphone pickup data of the left channel and the right channel, respectively, to obtain time-frequency-domain pickup signals for the first channel and the second channel.
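The patent text stops short of implementation details for this step. As a minimal sketch under assumed parameters (16 kHz sampling, 512-sample Hann windows, 50% overlap — none of these values are specified in the source), the two channels can be framed, windowed, and transformed like this:

```python
import numpy as np

def stft_two_channel(left, right, frame_len=512, hop=256, n_fft=512):
    """Frame, Hann-window, and FFT each channel independently."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(left) - frame_len) // hop
    spec = np.empty((2, n_frames, n_fft // 2 + 1), dtype=complex)
    for ch, x in enumerate((left, right)):
        frames = np.stack([x[i * hop:i * hop + frame_len] * window
                           for i in range(n_frames)])
        spec[ch] = np.fft.rfft(frames, n=n_fft, axis=1)
    return spec   # shape: (channel, frame, frequency bin)

# Usage: two seconds of stand-in "pickup" data at 16 kHz per channel.
fs = 16000
rng = np.random.default_rng(0)
left, right = rng.standard_normal(2 * fs), rng.standard_normal(2 * fs)
Y = stft_two_channel(left, right)
print(Y.shape)   # (2, 124, 257)
```

The resulting complex spectrograms are the time-frequency-domain pickup signals that the later masking and direction-estimation steps operate on.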



Abstract

The invention discloses a two-channel sound source localization method based on deep learning, which includes: performing framing, windowing, and Fourier transformation on the microphone pickup data of the left and right channels, respectively, to obtain time-frequency-domain pickup signals for the first and second channels; using deep learning to estimate a phase-sensitive mask from the time-frequency-domain pickup signal and its corresponding time-frequency-domain direct-sound signal; using the phase-sensitive mask to guide the estimation of sound source direction information and to compute the accuracy of the direction-information estimate; using deep learning to obtain an enhanced value of the direction information from the estimated direction information and its estimated accuracy; constructing a weighted histogram from the enhanced direction information and the accuracy of the direction-information estimate; and finally selecting the direction corresponding to the peak of the histogram as the direction of the sound source. The invention estimates the direction of the sound source from data picked up by a two-channel microphone, makes full use of the generalization ability of neural networks, and is more robust to noisy and reverberant environments.
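The pipeline in the abstract (mask estimation, mask-weighted direction estimates, weighted histogram, peak picking) can be illustrated without the neural networks. The sketch below computes the oracle phase-sensitive mask directly from a clean/noisy signal pair and uses mask-like weights in a direction histogram; the function names, the 5° bin width, and the toy data are all illustrative assumptions — in the patent, the mask and the enhanced direction information come from trained networks:

```python
import numpy as np

def phase_sensitive_mask(S, Y, eps=1e-8):
    """Oracle PSM: |S|/|Y| * cos(phase(S) - phase(Y)), clipped to [0, 1]."""
    m = (np.abs(S) / (np.abs(Y) + eps)) * np.cos(np.angle(S) - np.angle(Y))
    return np.clip(m, 0.0, 1.0)

def doa_weighted_histogram(doa_per_bin, weights, bin_width=5.0):
    """Weighted azimuth histogram; the peak bin's center is the DOA estimate."""
    n_bins = int(180.0 / bin_width)
    hist, edges = np.histogram(doa_per_bin, bins=n_bins,
                               range=(-90.0, 90.0), weights=weights)
    k = int(np.argmax(hist))
    return 0.5 * (edges[k] + edges[k + 1])

# Toy data: reliable time-frequency bins (high mask weight) cluster near +30
# degrees; unreliable bins (low weight) are scattered uniformly.
rng = np.random.default_rng(0)
doas = np.concatenate([rng.normal(30.0, 3.0, 500),
                       rng.uniform(-90.0, 90.0, 500)])
weights = np.concatenate([np.full(500, 0.9), np.full(500, 0.1)])
est = doa_weighted_histogram(doas, weights)
print(est)
```

Down-weighting unreliable bins is what makes the histogram peak survive noise and reverberation: scattered outliers contribute little mass, so the peak stays near the true direction.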

Description

Technical field

[0001] The present invention relates to the technical field of sound source localization, and in particular to a two-channel sound source localization method based on deep learning.

Background technique

[0002] At present, sound source localization technology mainly estimates the direction of a sound source from microphone-array data containing background noise and reverberation, in order to support tasks such as sound source separation and sound source tracking. In localization methods that output an azimuth, the source azimuth can be estimated by exploiting the orthogonality between the signal subspace and the noise subspace, but the performance of such algorithms degrades significantly in the presence of reverberation. Deep learning can substantially improve the robustness of such algorithms under noise and reverberation. Most sound source localization algorithms based on deep learning...
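The subspace approach alluded to in the background is the MUSIC family of algorithms: with one source, the noise-subspace eigenvectors of the spatial covariance matrix are orthogonal to the source's steering vector, so scanning candidate azimuths for that orthogonality reveals the direction. A minimal two-microphone narrowband sketch (1 kHz tone, 10 cm spacing, anechoic simulation — all illustrative assumptions, not parameters from the patent):

```python
import numpy as np

c, f, d = 343.0, 1000.0, 0.1             # speed of sound (m/s), tone (Hz), mic spacing (m)
angles = np.linspace(-90.0, 90.0, 181)   # 1-degree azimuth grid

def steering(theta_deg):
    """Far-field steering vector: relative delay across the two microphones."""
    tau = d * np.sin(np.radians(theta_deg)) / c
    return np.array([1.0, np.exp(-2j * np.pi * f * tau)])

# Simulate snapshots of a single source at +20 degrees plus sensor noise.
rng = np.random.default_rng(1)
s = rng.standard_normal(2000) + 1j * rng.standard_normal(2000)
X = np.outer(steering(20.0), s)
X += 0.1 * (rng.standard_normal(X.shape) + 1j * rng.standard_normal(X.shape))

R = X @ X.conj().T / X.shape[1]   # spatial covariance matrix
w, V = np.linalg.eigh(R)          # eigenvalues in ascending order
En = V[:, :1]                     # noise subspace: eigenvector of the smallest eigenvalue
# MUSIC pseudo-spectrum: large where the steering vector is orthogonal to En.
spectrum = [1.0 / np.linalg.norm(En.conj().T @ steering(a)) ** 2 for a in angles]
est = angles[int(np.argmax(spectrum))]
print(est)
```

In an anechoic simulation like this the peak lands on the true azimuth; under reverberation, reflected copies of the source break the single-steering-vector model and blur the pseudo-spectrum, which is the degradation the patent's deep-learning approach targets.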

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G01S 5/20; G06N 3/04; G06N 3/08
CPC: G01S 5/20; G06N 3/04; G06N 3/08
Inventor: 李军锋, 程龙彪, 夏日升, 颜永红
Owner: INST OF ACOUSTICS CHINESE ACAD OF SCI