
Sound scene classification method based on width and depth neural network

A deep-neural-network and scene-classification technology in the field of machine hearing, addressing problems such as long training times, classification accuracy that is difficult to improve beyond a certain level, and difficulty in meeting practical requirements.

Active Publication Date: 2020-09-29
SOUTH CHINA UNIV OF TECH


Problems solved by technology

[0003] When a width (broad) neural network is applied to acoustic scene classification, its classification accuracy is difficult to improve beyond a certain level, making it hard to meet practical requirements. In the past, acoustic scene classification was mostly based on deep neural networks, whose main shortcoming is long training time. In fact, a width neural network can achieve high classification accuracy on certain categories of acoustic scenes but low accuracy on others, so its overall accuracy cannot be raised beyond a certain level.



Examples


Embodiment

[0066] This embodiment discloses a sound scene classification method based on a width and depth neural network. The flowchart of the method is shown in Figure 1 and includes the following specific steps:

[0067] S1. Create an audio data set:

[0068] S1.1. The DCASE 2018 Task 5 public data set is used as the audio samples of the audio data set. This data set continuously records sound events in a home environment for one week, with 9 sound scene classes and 72984 audio samples in total; every sample has the same length (10 s), a sampling rate of 16 kHz, and the same number of quantization bits;
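As a hedged illustration of preparing the data set in S1.1, the 72984 samples can be partitioned into training and test subsets by shuffling indices. The excerpt does not state the actual train/test ratio, so the 80/20 split below is an assumption:

```python
import numpy as np

# Hypothetical 80/20 split of the 72984 DCASE 2018 Task 5 samples;
# the patent excerpt does not give the actual ratio.
rng = np.random.default_rng(42)
idx = rng.permutation(72_984)           # shuffle all sample indices
n_train = int(0.8 * len(idx))           # 58387 training samples
train_idx, test_idx = idx[:n_train], idx[n_train:]
print(len(train_idx), len(test_idx))    # 58387 14597
```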

[0069] S1.2. Extract the 20-dimensional logarithmic Mel spectrum features of each audio sample in the audio data set, i.e., each audio sample corresponds to a feature map of 20×399 pixels; after mean normalization, the map is conver...
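The feature extraction in S1.2 can be sketched with numpy alone. The excerpt does not give the framing parameters, so the window and hop lengths below are assumptions chosen so that a 10 s, 16 kHz clip yields exactly 399 frames; the Mel filterbank construction is a standard textbook design, not necessarily the patent's exact implementation:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + np.asarray(f, dtype=float) / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (np.asarray(m, dtype=float) / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    # Triangular filters evenly spaced on the Mel scale between 0 Hz and sr/2.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        for k in range(l, c):
            fb[i, k] = (k - l) / max(c - l, 1)   # rising slope
        for k in range(c, r):
            fb[i, k] = (r - k) / max(r - c, 1)   # falling slope
    return fb

def log_mel(y, sr=16000, n_fft=1024, hop=399, n_mels=20):
    # n_fft=1024, hop=399 (assumed): a 160000-sample clip gives 399 frames.
    n_frames = 1 + (len(y) - n_fft) // hop
    window = np.hanning(n_fft)
    frames = np.stack([y[t * hop : t * hop + n_fft] * window
                       for t in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2        # (frames, bins)
    mel = power @ mel_filterbank(sr, n_fft, n_mels).T       # (frames, mels)
    feat = np.log(mel + 1e-10).T                            # (20, 399)
    return feat - feat.mean(axis=1, keepdims=True)          # mean normalization

y = np.random.default_rng(0).standard_normal(10 * 16000)    # stand-in for one clip
F = log_mel(y)
print(F.shape)  # (20, 399)
```

The final line of `log_mel` performs the per-band mean normalization mentioned in S1.2.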



Abstract

The invention discloses a sound scene classification method based on a width and depth neural network, comprising the following steps: first, extracting logarithmic Mel spectrum features from sound scene audio samples and dividing them into a training set and a test set; designing a width neural network and a deep joint probability network; taking the logarithmic Mel spectrum features of each audio sample in the training set as input and pre-training the two networks; constructing a joint discriminant classification tree model according to the pre-training results, then training and optimizing this model; and finally, inputting the logarithmic Mel spectrum features of each audio sample in the test set into the joint discriminant classification tree model and identifying the sound scene corresponding to each audio sample. The constructed joint discriminant classification tree model overcomes the poor generalization ability and weak stability of a single network, and improves sound scene classification by exploiting the complementary advantages of the width neural network and the deep neural network.
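The excerpt does not specify the internal structure of the joint discriminant classification tree, but the abstract's idea of combining the two pre-trained networks by their complementary strengths can be sketched as a simple routing rule: trust the width network on the scene classes it classifies reliably, and defer to the deep network otherwise. Everything here (function names, the set of reliable classes, the dummy networks) is hypothetical illustration, not the patent's actual model:

```python
import numpy as np

def joint_decision(x, width_net, deep_net, width_reliable):
    # Ask the (fast) width network first; if its predicted class is one it is
    # known to classify reliably, accept it, otherwise defer to the deep network.
    top = int(np.argmax(width_net(x)))
    if top in width_reliable:
        return top
    return int(np.argmax(deep_net(x)))

# Dummy stand-ins for the two pre-trained networks (9 scene classes).
width_net = lambda x: np.eye(9)[int(x) % 9]        # "confident" in class x % 9
deep_net  = lambda x: np.eye(9)[(int(x) + 1) % 9]  # disagrees by one class

width_reliable = {0, 1, 2, 3}  # classes the width net handled well in validation
print(joint_decision(2, width_net, deep_net, width_reliable))  # width net trusted -> 2
print(joint_decision(7, width_net, deep_net, width_reliable))  # defer to deep net -> 8
```

The real model likely uses a learned tree over both networks' outputs; this sketch only shows the "advantage complementation" principle the abstract describes.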

Description

Technical field

[0001] The invention belongs to the technical field of machine hearing, relates to width and depth learning technology, and in particular relates to a sound scene classification method based on a width and depth neural network.

Background technique

[0002] People's daily activities contain various sound events, and combinations of these sound events constitute various sound scenes. Acoustic scene classification technology has a wide range of applications, such as audio monitoring, multimedia retrieval, automated assisted driving, and smart homes.

[0003] When a width neural network is applied to acoustic scene classification, its classification accuracy is difficult to improve beyond a certain level, making it hard to meet practical requirements. In the past, acoustic scene classification was mostly based on deep neural networks, but the long training time was the disadvantage of deep neural net...

Claims


Application Information

Patent Timeline
Patent Type & Authority Applications(China)
IPC (8): G06K9/62; G06N3/04; G06N3/08
CPC: G06N3/08; G06N3/047; G06N3/048; G06N3/045; G06F18/213; G06F18/24323; G06F18/214; Y02T90/00
Inventors: 黄张金, 李艳雄, 张文浩, 林子珩, 陈奕纯, 谭煜枫
Owner SOUTH CHINA UNIV OF TECH