
Breathing sound classification method based on deep learning

A classification method based on deep learning technology, applied in speech analysis, instruments, etc. It addresses problems such as model instability, a tendency to fall into local extreme points, and poor classification performance, with the effects of improving the classification recognition rate, reducing manpower, and improving efficiency.

Publication status: Inactive; publication date: 2020-09-08
NANKAI UNIV

AI Technical Summary

Problems solved by technology

The BP neural network has been used to classify normal and abnormal breath sounds, but the randomness of its weight initialization makes the trained model unstable and prone to falling into local extreme points, so the classification performance is poor.


Examples


Embodiment Construction

[0036] In order to explain in detail the technical content, structural features, objectives, and effects of the technical solution, a detailed description is given below in conjunction with specific embodiments and the accompanying drawings.

[0037] As shown in Figure 1, the flow chart of the deep-learning-based breath sound classification method, the method includes the following steps:

[0038] S1. Collect audio signal samples of breath sounds, and preprocess the audio signals;

[0039] S2. According to the breath sound cycle text information, perform cycle division on the audio signal from step S1 to obtain breath sound signals of a set cycle;

[0040] S3. Perform data augmentation on the breathing audio signals in the data set by an audio data augmentation method, and extract the acoustic features of the breathing audio signals;

[0041] S4. Using the acoustic features extracted in step S3, construct a type recognition model to classify and recognize the breath sounds and obtain a classification and recognition result.
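The published text does not disclose the concrete preprocessing, augmentation operations, or acoustic features, so the following Python sketch only illustrates how steps S1-S3 could be wired together. The helper names (preprocess, split_into_cycles, augment, extract_features, build_dataset), the band-pass range, the 4 kHz sample rate, and the use of librosa time-stretching, pitch-shifting, and MFCCs are all assumptions made for the example, not elements of the patented method.

# Hedged sketch of steps S1-S3; parameters and helpers are assumptions,
# not taken from the patent text.
import numpy as np
import librosa
import scipy.signal as sps

def preprocess(y, sr):
    """S1: example preprocessing -- band-pass filter and amplitude normalisation."""
    b, a = sps.butter(4, [100, 1800], btype="bandpass", fs=sr)  # assumed band
    y = sps.filtfilt(b, a, y)
    return y / (np.max(np.abs(y)) + 1e-8)

def split_into_cycles(y, sr, cycle_times):
    """S2: cut the signal into breathing cycles using (start, end) times in seconds,
    e.g. parsed from the breath sound cycle text information."""
    return [y[int(s * sr):int(e * sr)] for s, e in cycle_times]

def augment(cycle, sr):
    """S3a: simple audio augmentations (time-stretch, pitch-shift, added noise)."""
    out = [cycle]
    out.append(librosa.effects.time_stretch(cycle, rate=1.1))
    out.append(librosa.effects.pitch_shift(cycle, sr=sr, n_steps=1))
    out.append(cycle + 0.005 * np.random.randn(len(cycle)))
    return out

def extract_features(cycle, sr, n_mfcc=13):
    """S3b: one possible acoustic feature -- the mean MFCC vector per cycle."""
    mfcc = librosa.feature.mfcc(y=cycle, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

def build_dataset(wav_path, cycle_times, sr=4000):
    """S1-S3 end to end: returns a feature matrix ready for the S4 classifier."""
    y, sr = librosa.load(wav_path, sr=sr)
    y = preprocess(y, sr)
    feats = []
    for cycle in split_into_cycles(y, sr, cycle_times):
        for aug in augment(cycle, sr):
            feats.append(extract_features(aug, sr))
    return np.stack(feats)

The actual preprocessing, augmentation operations, and acoustic features are those described in the embodiments; the choices above merely make the four steps concrete.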



Abstract

The invention relates to the field of audio signal recognition, and in particular to a breathing sound classification method based on deep learning. The method comprises the steps of: S1, acquiring audio signal samples of breath sounds and preprocessing the audio signals; S2, performing period division on the audio signals from step S1 according to breath sound period text information, to obtain breath sound signals of a set period; S3, performing data augmentation on the breathing audio signals in the data set through an audio data augmentation method, and extracting acoustic features of the breathing audio signals; and S4, constructing a type recognition model using the acoustic features extracted in step S3 to classify and recognize the breath sounds and obtain a classification and recognition result.
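As a rough, non-authoritative illustration of the "type recognition model" in step S4, the sketch below maps per-cycle acoustic feature vectors to breath sound classes with a small fully connected network in PyTorch. The architecture, layer sizes, dropout rate, the 13-dimensional feature vector, and the four output classes are illustrative assumptions; this summary does not disclose the actual model structure.

# Hedged sketch of an S4 "type recognition model"; architecture and
# hyper-parameters are illustrative assumptions, not the patented design.
import torch
import torch.nn as nn

NUM_CLASSES = 4  # e.g. normal, wheezing, snoring, wet rales, per the background section

class BreathSoundClassifier(nn.Module):
    def __init__(self, feat_dim=13, num_classes=NUM_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 64),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        # x: (batch, feat_dim) tensor of per-cycle acoustic features
        return self.net(x)

# Example usage (names from the previous sketch are assumptions):
# feats = torch.tensor(build_dataset("breath.wav", cycle_times), dtype=torch.float32)
# logits = BreathSoundClassifier()(feats)
# predicted_class = logits.argmax(dim=1)  # the classification and recognition result

Training such a network with a standard cross-entropy loss on the feature matrix from the earlier sketch would yield the classification and recognition result referred to above; the real embodiment may use a different feature representation and network.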

Description

Technical field

[0001] The present invention relates to the field of audio signal recognition, and more specifically to a breathing sound classification method based on deep learning.

Background technique

[0002] Breath sounds contain a large amount of physiological information and are important indicators of respiratory health and respiratory disorders. As nonlinear, non-stationary random signals, breath sounds contain low-frequency components that cannot be discerned by the human ear. Digital stethoscopes provide a data source for the study of breath sounds. It is of great research significance to extract useful information, including the low-frequency components, from collected breath sounds and to use deep learning to automatically diagnose respiratory diseases.

[0003] According to pitch, main frequency distribution interval, and whether they are continuous, breath sounds can be roughly divided into 4 categories: normal sounds, wheezing sounds, snoring sounds, and wet rales...

Claims


Application Information

IPC (8): G10L17/26, G10L17/02, G10L25/03, G10L25/51
CPC: G10L17/26, G10L17/02, G10L25/51, G10L25/03
Inventor: 赵雪松, 殷爱茹
Owner: NANKAI UNIV