
Urban sound event classification method based on dual-feature 2-DenseNet in parallel

A classification method using parallel network technology, applied in speech analysis, computer components, instruments, etc., achieving high classification accuracy and strong generalization ability

Active Publication Date: 2019-10-29
JIANGNAN UNIV


Problems solved by technology

[0003] To address the need in existing practical applications for a sound classification method with higher accuracy, the present invention provides an urban sound event classification method based on dual-feature parallel 2-DenseNet, which fuses feature information more efficiently and achieves higher classification accuracy and stronger generalization ability.




Detailed Description of the Embodiments

[0049] As shown in Figures 1 to 4, the urban sound event classification method of the present invention, based on dual-feature parallel 2-DenseNet, comprises the following steps:

[0050] S1: Collect the audio data to be processed, preprocess it, and output an audio frame sequence; the preprocessing operations include sampling and quantization, pre-emphasis, and windowing.
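The S1 preprocessing pipeline (pre-emphasis, framing, windowing) can be sketched as follows. The specific parameter values here (pre-emphasis coefficient 0.97, 25 ms frames, 10 ms hop, Hamming window) are common defaults assumed for illustration, not values stated in the patent:

```python
import numpy as np

def preprocess(signal, sr=22050, pre_emph=0.97, frame_ms=25, hop_ms=10):
    """Sketch of the S1 steps: pre-emphasis, framing, Hamming windowing.
    All parameter values are illustrative assumptions."""
    # Pre-emphasis boosts high frequencies: y[n] = x[n] - a * x[n-1]
    emphasized = np.append(signal[0], signal[1:] - pre_emph * signal[:-1])

    frame_len = int(sr * frame_ms / 1000)
    hop_len = int(sr * hop_ms / 1000)
    n_frames = 1 + max(0, (len(emphasized) - frame_len) // hop_len)

    # Slice the signal into overlapping frames
    frames = np.stack([emphasized[i * hop_len : i * hop_len + frame_len]
                       for i in range(n_frames)])
    # Apply a Hamming window to each frame to reduce spectral leakage
    return frames * np.hamming(frame_len)

sig = np.random.randn(22050)      # one second of synthetic audio
frames = preprocess(sig)
print(frames.shape)               # (number of frames, samples per frame)
```

The output frame sequence is what S2 consumes for feature extraction.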

[0051] S2: Perform time-domain and frequency-domain analysis on the audio frame sequence, extract Mel-frequency cepstral coefficients (MFCC) and Gammatone-frequency cepstral coefficients (GFCC) respectively, and output a dual-feature vector sequence. Each dual-feature vector is a 2-dimensional array: the first dimension is the number of frames after sampling the audio data, and the second dimension is the feature dimension, i.e., the dimension of the Mel-frequency cepstral coefficients and the Gammatone-frequency cepstr...
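A minimal NumPy sketch of the MFCC extraction described in S2 is below; GFCC extraction is analogous, replacing the Mel filterbank with a Gammatone filterbank. The parameter choices (40 Mel filters, 13 coefficients) are illustrative assumptions, and production code would typically use a library such as librosa instead:

```python
import numpy as np
from scipy.fft import dct

def mel_filterbank(sr, n_fft, n_mels=40):
    """Triangular filters spaced evenly on the Mel scale (sketch)."""
    def hz_to_mel(f): return 2595 * np.log10(1 + f / 700)
    def mel_to_hz(m): return 700 * (10 ** (m / 2595) - 1)
    mel_pts = np.linspace(0, hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        for k in range(l, c):
            fb[m - 1, k] = (k - l) / max(c - l, 1)   # rising slope
        for k in range(c, r):
            fb[m - 1, k] = (r - k) / max(r - c, 1)   # falling slope
    return fb

def mfcc(frames, sr, n_mfcc=13):
    """Power spectrum -> Mel filterbank -> log -> DCT, per frame."""
    n_fft = frames.shape[1]
    power = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2 / n_fft
    log_mel = np.log(power @ mel_filterbank(sr, n_fft).T + 1e-10)
    # Keep the first n_mfcc DCT coefficients of each frame
    return dct(log_mel, type=2, norm='ortho', axis=1)[:, :n_mfcc]

frames = np.hamming(551) * np.random.randn(98, 551)  # windowed frames from S1
feats = mfcc(frames, sr=22050)
print(feats.shape)  # (frames, coefficients) -- the 2-D dual-feature layout
```

The resulting (frames, coefficients) shape matches the 2-dimensional dual-feature vector structure described above.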



Abstract

The invention provides an urban sound event classification method based on dual-feature parallel 2-DenseNet. The method fuses feature information more efficiently and offers higher classification accuracy and stronger generalization ability. It comprises the following steps: S1, acquire and preprocess the audio data to be processed, and output an audio frame sequence; S2, perform time-domain and frequency-domain analysis on the audio frame sequence, and output a Mel-frequency cepstral coefficient (MFCC) feature vector sequence and a Gammatone cepstral coefficient (GFCC) feature vector sequence respectively; S3, construct a classification model comprising a network built by combining a second-order Markov model with a DenseNet model; taking this second-order DenseNet (2-DenseNet) model as the base network, arrange the base network into two parallel paths, and train the classification model to obtain a trained classification model; S4, process the feature vector sequences output in step S2, split them into two paths in dual-feature form, and input them into the trained classification model for classification and recognition to obtain the sound event classification result.
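The dual-path architecture in the abstract can be illustrated with a hypothetical PyTorch sketch: two DenseNet-style branches, one per feature type (MFCC and GFCC), whose outputs are fused before classification. The layer counts, growth rate, and fusion by channel concatenation are illustrative assumptions; the patent's actual 2-DenseNet with second-order connectivity is not reproduced here:

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Simplified dense block: each layer's output is concatenated
    onto its input (DenseNet-style connectivity)."""
    def __init__(self, in_ch, growth=12, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(ch), nn.ReLU(),
                nn.Conv2d(ch, growth, kernel_size=3, padding=1)))
            ch += growth
        self.out_ch = ch

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)
        return x

class DualFeatureNet(nn.Module):
    """Two parallel DenseNet-style branches, one per feature type;
    their outputs are fused, then classified."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.branch_mfcc = DenseBlock(1)
        self.branch_gfcc = DenseBlock(1)
        fused = self.branch_mfcc.out_ch + self.branch_gfcc.out_ch
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(fused, n_classes))

    def forward(self, mfcc, gfcc):
        a = self.branch_mfcc(mfcc)   # each input: (batch, 1, frames, dims)
        b = self.branch_gfcc(gfcc)
        return self.head(torch.cat([a, b], dim=1))

model = DualFeatureNet()
mfcc = torch.randn(2, 1, 98, 13)   # batch of MFCC feature maps
gfcc = torch.randn(2, 1, 98, 13)   # batch of GFCC feature maps
print(model(mfcc, gfcc).shape)
```

Each branch sees only one feature type, mirroring the patent's split of the dual-feature input into two parallel paths before fusion.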

Description

Technical Field

[0001] The invention relates to the technical field of sound recognition, and in particular to an urban sound event classification method based on dual-feature parallel 2-DenseNet.

Background Technique

[0002] Building smart-city complexes is an important trend in modern urban development. One current approach to building smart cities is to use a large sensor network to collect audio data such as traffic conditions and noise levels in a target city, and to analyze that data to guide urban design and technical decision-making. Classification of urban sound events is mainly used in noise monitoring, urban security, soundscape assessment, multimedia information retrieval, etc. In the prior art, network models such as SVM, VGG, and DCNN are used for urban sound event classification. In 2014, the Justin Salamon team used Mel cepstral coefficient features and a support vector machine model to establish a baseline, and i...

Claims


Application Information

IPC(8): G10L25/51, G10L25/24, G10L25/27, G06K9/62
CPC: G10L25/51, G10L25/24, G10L25/27, G06F18/24147, G06F18/241
Inventor 曹毅黄子龙刘晨盛永健李巍张宏越
Owner JIANGNAN UNIV