Acoustic Scene Classification Method Based on Network Model Fusion

A scene-classification and network-model technology, applied to biological neural network models, neural learning methods, speech analysis, etc. It addresses problems such as poor robustness, misjudgment and missed detection, and lack of learning ability, so as to improve the recognition rate and robustness.

Active Publication Date: 2021-09-21
南京天悦电子科技有限公司
Cites: 6 · Cited by: 0

AI Technical Summary

Problems solved by technology

[0007] 2) The recognition ability of pattern recognition algorithms is strongly affected by environmental changes, so robustness is poor;
[0008] 3) Traditional classifiers have weak classification ability and no capacity to learn.
[0009] In addition, video-based event detection methods used in the prior art suffer from adverse factors such as insufficient light, dim environments, and excessive dust in the air, which blur the returned video image and easily cause misjudgment and missed detection; existing acoustic scene classification likewise suffers from a low recognition rate and poor robustness.




Detailed Description of the Embodiments

[0040] The present invention will be further described below in conjunction with the accompanying drawings.

[0041] As shown in Figures 1 to 7, the acoustic scene classification method based on network model fusion of the present invention is introduced taking six models as an example. It comprises the following steps.

[0042] Step (1): first, divide the sample into frames, with a frame length of 50 ms and a frame shift of 20 ms; second, compute the FFT of each frame of data, with 2048 FFT points; third, use a bank of 80 gammatone filters to calculate the gammatone filter cepstral coefficients, and use a Mel filter bank with 80 subband filters to calculate the logarithmic Mel spectrogram; finally, calculate the first-order and second-order differences of the Mel spectrum to obtain the multi-channel input features.
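The framing, FFT, log-Mel, and difference steps above can be sketched with NumPy alone. The sample rate (44.1 kHz), the Hann window, and the simplified triangular Mel filterbank are illustrative assumptions not stated in the patent, and the parallel gammatone (GFCC) branch is omitted for brevity:

```python
import numpy as np

SR = 44_100                      # assumed sample rate (not given in the text)
FRAME_LEN = int(0.050 * SR)      # 50 ms frame length
FRAME_HOP = int(0.020 * SR)      # 20 ms frame shift
N_FFT = 2048
N_MELS = 80

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    """Triangular Mel filterbank (simplified stand-in for the 80-band bank)."""
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)   # rising slope
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)   # falling slope
    return fb

def log_mel_with_deltas(signal):
    # 1) frame the signal: 50 ms windows with a 20 ms hop
    n_frames = 1 + (len(signal) - FRAME_LEN) // FRAME_HOP
    idx = np.arange(FRAME_LEN)[None, :] + FRAME_HOP * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hanning(FRAME_LEN)
    # 2) 2048-point FFT -> power spectrum
    power = np.abs(np.fft.rfft(frames, n=N_FFT)) ** 2
    # 3) 80-band Mel filterbank -> log-Mel spectrogram
    logmel = np.log(power @ mel_filterbank(SR, N_FFT, N_MELS).T + 1e-10)
    # 4) first- and second-order differences along the time axis
    d1 = np.gradient(logmel, axis=0)
    d2 = np.gradient(d1, axis=0)
    # stack into a 3-channel input (static, delta, delta-delta)
    return np.stack([logmel, d1, d2], axis=0)

feats = log_mel_with_deltas(np.random.randn(SR))  # 1 s of noise
print(feats.shape)  # (3, 48, 80): channels x frames x mel bands
```

In practice a library such as librosa would replace the hand-rolled filterbank, but the channel layout (static spectrogram plus its two difference orders) is the point of the sketch.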

[0043] Step (2), constructing six different input features through different channel separation methods and audio cutting methods; constructi...
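Since the paragraph is truncated before the six variants are listed, the channel-separation and audio-cutting idea can only be illustrated under assumptions: the mono/left/right/mid/side split and the 1 s segment length below are common choices, not the patent's actual configuration:

```python
import numpy as np

def channel_variants(stereo):
    """Derive several 1-D signals from one stereo clip of shape (n, 2).

    The mono/left/right/mid/side split is a common convention; the
    patent's exact six input features are not reproduced here."""
    left, right = stereo[:, 0], stereo[:, 1]
    return {
        "mono":  0.5 * (left + right),
        "left":  left,
        "right": right,
        "mid":   0.5 * (left + right),   # mid equals the mono mix
        "side":  0.5 * (left - right),   # inter-channel difference
    }

def cut(signal, seg_len, hop):
    """Audio cutting: split a long clip into fixed-length segments."""
    n = 1 + max(0, len(signal) - seg_len) // hop
    return [signal[i * hop : i * hop + seg_len] for i in range(n)]

stereo = np.random.randn(44_100 * 10, 2)               # 10 s stereo clip
variants = channel_variants(stereo)
segments = cut(variants["mono"], seg_len=44_100, hop=22_050)
print(len(variants), len(segments))  # 5 channel variants, 19 segments
```

Each (variant, cutting) combination then feeds its own feature extractor and CNN, which is what yields the several distinct input features the step describes.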



Abstract

The invention discloses an acoustic scene classification method based on network model fusion. It constructs a variety of different input features by means of channel separation and audio cutting, and extracts gammatone filter cepstral coefficients, Mel spectral features, and their first-order and second-order differences as input features; a corresponding convolutional neural network model is trained for each. Finally, support vector machine stacking is used to realize the final fusion model. The invention uses channel separation, audio cutting, and related methods to extract highly discriminative audio input features, constructs single-channel and dual-channel convolutional neural networks, and finally forms a distinctive model fusion structure that can obtain richer and more multi-dimensional information, effectively improving the classification recognition rate and robustness for different acoustic scenes, with good application prospects.
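The SVM-stacking fusion described above can be sketched as: each base CNN emits a class-probability vector per clip, the six vectors are concatenated into a meta-feature, and a meta-classifier is trained on those meta-features. To keep the sketch dependency-free, the base CNNs are replaced by synthetic probability generators and the SVM by a nearest-centroid placeholder; all names, sizes, and the 10-class setup are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N_MODELS, N_CLASSES, N_CLIPS = 6, 10, 200

# Balanced ground-truth labels: 20 clips per class, shuffled.
labels = rng.permutation(np.repeat(np.arange(N_CLASSES), N_CLIPS // N_CLASSES))

def fake_model_probs(model_id, labels):
    """Stand-in for one base CNN: a softmax vector per clip, weakly
    biased toward the true class (the real patent trains six CNNs)."""
    logits = rng.normal(size=(len(labels), N_CLASSES))
    logits[np.arange(len(labels)), labels] += 2.0 + 0.2 * model_id
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Stacking: concatenate the six models' probability vectors -> meta-features.
meta_X = np.concatenate(
    [fake_model_probs(m, labels) for m in range(N_MODELS)], axis=1
)
print(meta_X.shape)  # (200, 60): 6 models x 10 classes per clip

# Placeholder meta-classifier (nearest centroid instead of the patent's SVM).
centroids = np.stack([meta_X[labels == c].mean(axis=0) for c in range(N_CLASSES)])
dists = ((meta_X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
pred = np.argmin(dists, axis=1)
acc = (pred == labels).mean()
```

With real models, the meta-features would come from cross-validated predictions to avoid leakage, and an SVM (e.g. scikit-learn's `SVC`) would replace the centroid rule.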

Description

Technical Field

[0001] The invention relates to the technical field of acoustic scene classification, and in particular to an acoustic scene classification method based on network model fusion.

Background Art

[0002] Acoustic scene classification technology uses computational means to classify sound scenes according to the information contained in different scenes. This technology is of great significance for improving the automation of machines, allowing machines to automatically perceive environmental features, retrieve audio content, and improve the performance of multimedia electronic products.

[0003] The features used in traditional acoustic scene classification mainly include time-domain features such as zero-crossing rate and energy, and features in the frequency and cepstrum domains. Commonly used classification methods include the simple threshold judgment method, the Gaussian Mixture Model (GMM) method, ...

Claims


Application Information

Patent Type & Authority: Patent (China)
IPC(8): G10L25/30, G10L25/24, G10L25/51, G06N3/08, G06N3/04
CPC: G10L25/30, G10L25/24, G10L25/51, G06N3/08, G06N3/045
Inventors: 唐闺臣, 梁瑞宇, 王青云, 包永强, 冯月芹, 李明
Owner: 南京天悦电子科技有限公司