Sound scene classification method based on network model fusion

A sound scene classification and network model technology, applied to biological neural network models, neural learning methods, speech analysis, and related fields. It addresses problems such as poor robustness, pattern recognition performance that is strongly affected by environmental changes, and misjudgments and missed detections, achieving the effect of improving the recognition rate and robustness.

Active Publication Date: 2019-12-20
南京天悦电子科技有限公司

Problems solved by technology

[0007] 2) The recognition ability of pattern recognition algorithms is strongly affected by environmental changes, and their robustness is poor;
[0008] 3) Traditional classifiers have weak classification ability and no learning ability.
[0009] In addition, a video-based event detection method used in t...

Embodiment Construction

[0040] The present invention will be further described below in conjunction with the accompanying drawings.

[0041] As shown in Figures 1 to 7, taking six models as an example, the sound scene classification method based on network model fusion of the present invention is described. The method includes the following steps.

[0042] Step (1): first, divide the sample into frames, with a frame length of 50 ms and a frame shift of 20 ms; second, compute the FFT of each frame of data, with 2048 FFT points; third, use a bank of 80 gammatone filters to compute the gammatone filter cepstral coefficients, and use a Mel filter bank with 80 sub-band filters to compute the logarithmic Mel spectrogram; finally, compute the first-order and second-order differences of the Mel spectrum to obtain the multi-channel input features.
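
The following is a minimal sketch of this feature extraction step in Python with librosa, assuming 32 kHz mono audio so that a 50 ms frame (1600 samples) fits within the 2048-point FFT. It covers the log-Mel branch and its differences; the gammatone (GFCC) branch would need a separate gammatone filter bank (e.g., a third-party package) and is omitted here.

    import numpy as np
    import librosa

    def extract_features(path, sr=32000, n_fft=2048, n_mels=80):
        """Log-Mel spectrogram plus first/second-order differences,
        framed with a 50 ms window and a 20 ms shift as in step (1)."""
        y, sr = librosa.load(path, sr=sr, mono=True)
        win = int(0.050 * sr)   # 50 ms frame length (1600 samples at 32 kHz)
        hop = int(0.020 * sr)   # 20 ms frame shift
        mel = librosa.feature.melspectrogram(
            y=y, sr=sr, n_fft=n_fft, win_length=win, hop_length=hop,
            n_mels=n_mels)                        # 80 sub-band Mel filters
        log_mel = librosa.power_to_db(mel)        # logarithmic Mel spectrogram
        d1 = librosa.feature.delta(log_mel, order=1)  # first-order difference
        d2 = librosa.feature.delta(log_mel, order=2)  # second-order difference
        return np.stack([log_mel, d1, d2], axis=0)    # (3, 80, T) channel stack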

[0043] Step (2): construct six different input features through different channel separation methods and audio cutting methods; constructi...
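
Since the text of step (2) is truncated here, the sketch below is only a plausible reading of "channel separation" and "audio cutting": deriving left, right, mid (sum), and side (difference) mono signals from a stereo clip, and cutting each into fixed-length segments. The channel set and the segment length are illustrative assumptions, not details taken from the patent.

    import librosa

    def channel_variants(path, sr=32000):
        """Derive mono variants from a stereo clip (assumed scheme)."""
        y, _ = librosa.load(path, sr=sr, mono=False)
        if y.ndim == 1:                      # clip is already mono
            return {"mono": y}
        left, right = y[0], y[1]
        return {"left": left, "right": right,
                "mid": 0.5 * (left + right),    # sum channel
                "side": 0.5 * (left - right)}   # difference channel

    def cut_segments(y, sr, seg_seconds=1.0):
        """Cut a signal into fixed-length, non-overlapping segments."""
        seg = int(seg_seconds * sr)
        return [y[i * seg:(i + 1) * seg] for i in range(len(y) // seg)]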


Abstract

The invention discloses a sound scene classification method based on network model fusion. In the method, multiple different input features are created through channel separation, audio cutting, and other means; the gammatone filter cepstral coefficients, Mel spectral features, and first-order and second-order differences of the audio signal are extracted as input features; multiple corresponding convolutional neural network models are trained separately; and finally a support vector machine stacking method is adopted to realize the final fusion model. By extracting highly discriminative audio input features through channel separation and audio cutting, and by constructing single-channel and two-channel convolutional neural networks, the method produces a distinctive model fusion structure that captures richer and more stereoscopic information, effectively improving the recognition rate and robustness of classification across different sound scenes. The method therefore has good application prospects.
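
The support vector machine stacking step can be sketched as follows, assuming each of the six trained CNNs outputs a (clips x classes) probability matrix on a held-out set; an SVM (scikit-learn's SVC here) is then trained on the concatenated outputs as the fusion meta-classifier. The function names and the RBF kernel are illustrative assumptions.

    import numpy as np
    from sklearn.svm import SVC

    def fit_stacking_svm(cnn_probs, labels):
        """Fit the fusion SVM on the stacked outputs of the six CNNs.

        cnn_probs: list of six (n_clips, n_classes) probability arrays.
        """
        X = np.concatenate(cnn_probs, axis=1)   # (n_clips, 6 * n_classes)
        svm = SVC(kernel="rbf", C=1.0)
        svm.fit(X, labels)
        return svm

    def fuse_predict(svm, cnn_probs):
        """Predict the final scene labels with the fused model."""
        return svm.predict(np.concatenate(cnn_probs, axis=1))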

Description

Technical field

[0001] The invention relates to the technical field of acoustic scene classification, and in particular to an acoustic scene classification method based on network model fusion.

Background technique

[0002] Sound scene classification technology uses computational means to classify sound scenes according to the information contained in different sound scenes. This technology is of great significance for improving the automation of machines: it allows machines to automatically perceive environmental features, supports retrieval of audio content, and improves the performance of multimedia electronic products.

[0003] The features used in traditional acoustic scene classification mainly include time-domain features such as the zero-crossing rate and energy, as well as features in the frequency domain and cepstrum domain. Commonly used classification methods include the simple threshold judgment method, the Gaussian Mixture Model (GMM) method,...
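
For context, the traditional time-domain features named in paragraph [0003] are straightforward to compute per frame; a minimal sketch with librosa follows, where the file name and frame parameters are illustrative:

    import librosa

    # Zero-crossing rate and short-time energy (RMS) per frame.
    y, sr = librosa.load("scene.wav", sr=None, mono=True)  # hypothetical file
    zcr = librosa.feature.zero_crossing_rate(y, frame_length=2048, hop_length=512)
    rms = librosa.feature.rms(y=y, frame_length=2048, hop_length=512)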


Application Information

IPC(8): G10L25/30, G10L25/24, G10L25/51, G06N3/08, G06N3/04
CPC: G10L25/30, G10L25/24, G10L25/51, G06N3/08, G06N3/045
Inventor: 唐闺臣, 梁瑞宇, 王青云, 包永强, 冯月芹, 李明
Owner: 南京天悦电子科技有限公司