
A depression auxiliary detection method based on acoustic characteristics and sparse mathematics and a classifier

A depression auxiliary detection technology based on acoustic features, applied in instruments, speech analysis, and psychological devices, addressing problems such as the lack of objective evaluation indicators, high misjudgment rates, and the reliance on a single detection and screening method.

Active Publication Date: 2018-02-02
NORTHWEST UNIV +1


Problems solved by technology

[0003] To sum up, the problems in the existing technology are as follows: traditional depression detection methods rely on subjective scales and clinicians' subjective judgments, which leads to a high misjudgment rate; the detection and screening methods are limited; and effective objective evaluation indicators are lacking.



Examples


Embodiment 1

[0084] The depression speech recognition system needs to operate in a quiet environment; once background noise is introduced, recognition performance degrades. This embodiment therefore provides a speech enhancement method based on improved spectral subtraction, which specifically includes the following steps:

[0085] Step 1: Assume that speech is a stationary signal, and that noise and speech are additive and uncorrelated with each other. The noisy speech signal can then be expressed as:

[0086] y(n)=s(n)+d(n), 0≤n≤N-1 (1)

[0087] Where s(n) is the pure speech signal, d(n) is stationary additive Gaussian noise, and y(n) is the noisy speech signal. Representing the noisy speech signal in the frequency domain, where * denotes the complex conjugate, gives:

[0088] |Y_k|^2 = |S_k|^2 + |N_k|^2 + S_k N_k^* + S_k^* N_k (2)

[0089] Step 2: Assume that the noise is uncorrelated, that is, s(n) and d(n) a...
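The basic spectral subtraction that equations (1)–(2) set up can be sketched as follows. This is an illustrative sketch of the standard method, not the patent's "improved" variant (which is not fully specified here); the frame length, hop, spectral floor, and the assumption that the leading frames are noise-only are all illustrative choices.

```python
import numpy as np

def spectral_subtraction(noisy, frame_len=256, hop=128, noise_frames=5):
    """Basic magnitude spectral subtraction, assuming the first few
    frames contain noise only (illustrative parameter choices)."""
    window = np.hanning(frame_len)
    # Frame the signal with 50% overlap
    n_frames = 1 + (len(noisy) - frame_len) // hop
    frames = np.stack([noisy[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    spec = np.fft.rfft(frames, axis=1)
    mag, phase = np.abs(spec), np.angle(spec)
    # Estimate noise power |N_k|^2 from leading (assumed silent) frames
    noise_pow = np.mean(mag[:noise_frames] ** 2, axis=0)
    # Subtract noise power; floor at a small fraction to avoid
    # negative power and musical-noise artifacts
    clean_pow = np.maximum(mag ** 2 - noise_pow, 0.01 * noise_pow)
    clean_mag = np.sqrt(clean_pow)
    # Reconstruct with overlap-add, reusing the noisy phase
    clean_frames = np.fft.irfft(clean_mag * np.exp(1j * phase),
                                n=frame_len, axis=1)
    out = np.zeros(len(noisy))
    for i in range(n_frames):
        out[i * hop : i * hop + frame_len] += clean_frames[i]
    return out
```

The key simplification, consistent with Step 2 above, is dropping the cross terms S_k N_k^* + S_k^* N_k (zero in expectation when speech and noise are uncorrelated), so the clean power is estimated as |Y_k|^2 − |N_k|^2.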

Embodiment 2

[0100] Based on the signal enhancement of Embodiment 1, this embodiment of the present invention extracts the characteristic parameters of different emotional voices (fundamental frequency, formants, energy, and short-term average amplitude). Five statistical features commonly used in emotion recognition (maximum, minimum, range of variation, mean, and variance) are recorded for each parameter, so as to reflect the voice characteristics of depressed patients and their differences from the other two types of emotional speech. The embodiment specifically includes the following steps:

[0101] Step 1: Read in the voice data and preprocess it. After endpoint detection, take out a frame of voice data, apply a window, and compute the cepstrum; then search for a peak near the expected pitch period. If the cepstral peak exceeds the preset threshold, the input speech segment is classified as voiced, and the position of...
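The cepstral voiced/unvoiced decision in Step 1 can be sketched as follows; the sampling rate, pitch search range, and threshold value are illustrative assumptions, not values from the patent:

```python
import numpy as np

def cepstral_pitch(frame, fs=8000, fmin=60, fmax=400, threshold=0.08):
    """Voicing decision and pitch estimate via the real cepstrum.
    A cepstral peak in the expected pitch-lag range that exceeds
    `threshold` marks the frame as voiced (values are illustrative)."""
    frame = frame * np.hamming(len(frame))
    spectrum = np.fft.rfft(frame)
    # Real cepstrum: inverse transform of the log magnitude spectrum
    cep = np.fft.irfft(np.log(np.abs(spectrum) + 1e-12))
    # Search only lags corresponding to plausible pitch periods
    lo, hi = int(fs / fmax), min(int(fs / fmin), len(cep) - 1)
    lag = lo + int(np.argmax(cep[lo:hi]))
    voiced = cep[lag] > threshold
    return voiced, (fs / lag if voiced else 0.0)
```

For a voiced frame the log spectrum is a harmonic comb, so the cepstrum shows a sharp peak at the pitch period; unvoiced or silent frames lack that peak and fall below the threshold.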

Embodiment 3

[0116] In this embodiment of the present invention, an auxiliary judgment on whether a subject suffers from depression is made based on joint speech recognition and facial emotion recognition, specifically including the following steps:

[0117] Step 1: Read in the voice data and preprocess it, and use the method in Embodiment 1 to perform signal enhancement on all voices.

[0118] Step 2: Select a standard 3-layer BP neural network and input the three types of voices (fear, normal, and depression) in order. Extract 12 MFCC eigenvalues to form a 12-dimensional feature vector, so the input layer of the BP neural network has 12 nodes. The number of output-layer nodes is determined by the number of categories; since three speech emotions are recognized, the output layer has 3 nodes, and the hidden layer has 6 nodes. When training the network, if the input...
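The 12–6–3 network described above can be sketched as follows. The synthetic Gaussian data stands in for real MFCC vectors, and using scikit-learn's `MLPClassifier` is an illustrative substitution for the BP training procedure described in the text:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for 12-dimensional MFCC feature vectors from the
# three emotion classes; real inputs would come from the MFCC
# extraction step described in the embodiment.
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 12))
               for c in range(3)])
y = np.repeat([0, 1, 2], 100)  # 0 = fear, 1 = normal, 2 = depression

# A 3-layer network matching the text: 12 inputs, 6 hidden nodes,
# 3 outputs (one per emotion class), sigmoid activation as in
# classic BP networks
clf = MLPClassifier(hidden_layer_sizes=(6,), activation="logistic",
                    max_iter=2000, random_state=0)
clf.fit(X, y)
train_acc = clf.score(X, y)  # high on these well-separated classes
```

A backpropagation network of this size trains quickly; the choice of 6 hidden nodes follows the text, though in practice the hidden-layer width would be tuned on held-out data.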



Abstract

The invention belongs to the technical fields of speech processing and image processing and discloses a depression auxiliary detection method based on acoustic characteristics and sparse mathematics, together with a classifier. The method performs depression diagnosis based on joint recognition of speech and facial emotion; estimates the glottal signal through an inverse filter; carries out global analysis of the speech signals to extract characteristic parameters, analyzing their timing and distribution features to find the prosodic rules of speech under different emotions as the basis for emotion recognition; uses MFCCs as the speech-signal representation for characteristic-parameter analysis, collects recorded data across multiple groups of training data, and establishes a neural network model for diagnosis; obtains a sparse linear combination of a test sample through an OMP-based sparse representation algorithm; and classifies facial emotions, linearly combining the result with the speech recognition result to obtain the final probability for each class. The depression recognition rate is greatly improved, and the cost is low.
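The OMP-based sparse representation step mentioned above can be sketched as a minimal greedy pursuit; the dictionary construction and dimensions below are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal Matching Pursuit: greedily select the dictionary
    atom most correlated with the residual, then re-fit all selected
    coefficients by least squares (illustrative sketch)."""
    residual = y.copy()
    support, coefs = [], None
    for _ in range(n_nonzero):
        # Atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares fit restricted to the selected atoms
        coefs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coefs
    x = np.zeros(D.shape[1])
    x[support] = coefs
    return x
```

In a sparse-representation classifier of the kind the abstract describes, the dictionary columns would be training samples grouped by class; a test sample is assigned to the class whose atoms reconstruct it with the smallest residual.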

Description

technical field

[0001] The invention belongs to the technical fields of voice processing and image processing, and in particular relates to an auxiliary detection method and classifier for depression based on acoustic features and sparse mathematics.

Background technique

[0002] Depression is a mental disorder accompanied by abnormal thinking and behavior, and it has become a serious public health and social problem worldwide. According to data from the National Institute of Mental Health (NIMH), in 2015 an estimated 16.1 million adults aged 18 or older in the United States (6.7 percent) had at least one major depressive episode in the past year. Its main symptoms are persistent sadness, feelings of hopelessness, and difficulty falling asleep, and severe patients may have suicidal thoughts and make suicide attempts. Therefore, one of the best strategies for reducing suicide risk is an effective detection method. In recent years, scholars at home and abroad have done...

Claims


Application Information

IPC(8): G10L25/63; G10L25/30; G10L25/24; G10L25/15; G10L25/93; G10L15/02; G10L15/08; G10L21/0208; A61B5/16
CPC: A61B5/16; G10L15/02; G10L15/08; G10L21/0208; G10L25/15; G10L25/24; G10L25/30; G10L25/63; G10L25/93; G10L2021/02087
Inventor: 赵健, 苏维文, 姜博, 刘敏, 张超, 路婷婷
Owner NORTHWEST UNIV