
Method for estimating severity of dysarthria based on deep audio features

A dysarthria severity estimation technology, applied in the field of estimating dysarthria severity from deep audio features, which addresses the problems of invasive diagnostic methods, expensive instruments, and patients with dysarthria not receiving timely treatment.

Status: Inactive | Publication Date: 2018-09-28
SOUTH CHINA UNIV OF TECH

AI Technical Summary

Problems solved by technology

However, due to a shortage of professionals in related fields in China, a considerable number of patients with dysarthria cannot receive timely treatment.
In addition, manual assessment is highly subjective, and different experts are likely to rate the severity of the same case differently.
Existing instruments and examination methods, such as fiberoptic endoscopy of the palate and throat, videofluoroscopy, laryngeal dynamics analysis, and tongue-pressure sensors, yield more objective and accurate evaluations, but the instruments are generally expensive, and some of the diagnostic procedures are invasive and cause severe discomfort, so patients are very reluctant to cooperate with the diagnosis.


Examples


Embodiment

[0075] As shown in Figure 1, a method for estimating the severity of dysarthria based on deep audio features comprises the following steps:

[0076] S1. Preprocess the speech data and extract acoustic features, where the acoustic features comprise linear prediction coefficients, fundamental frequency, fundamental-frequency perturbation, amplitude, amplitude perturbation, zero-crossing rate, and formants, yielding the speech-data feature matrix F = [linear prediction coefficients, fundamental frequency, fundamental-frequency perturbation, amplitude, amplitude perturbation, zero-crossing rate, formants].
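As a rough illustration of step S1, the sketch below computes per-frame versions of these features in Python with librosa. The frame size, LPC order, pitch range, and the simplified jitter/shimmer-style perturbation formulas are all assumptions for the sketch, not values taken from the patent:

```python
import numpy as np
import librosa

def extract_acoustic_features(wav_path, sr=16000, frame_len=400, hop=160):
    """Sketch of step S1; all parameters are assumed, not the patent's."""
    y, _ = librosa.load(wav_path, sr=sr)

    # Per-frame linear prediction coefficients (order 12, assumed)
    frames = librosa.util.frame(y, frame_length=frame_len, hop_length=hop).T
    lpc = np.array([librosa.lpc(f, order=12)[1:] for f in frames])
    n = len(frames)

    # Fundamental frequency via pYIN; unvoiced frames become 0
    f0, _, _ = librosa.pyin(y, fmin=50, fmax=500, sr=sr,
                            frame_length=frame_len, hop_length=hop)
    f0 = np.nan_to_num(f0)[:n]

    # Perturbation features: frame-to-frame relative change
    # (simplified stand-ins for jitter and shimmer)
    jitter = np.abs(np.diff(f0, prepend=f0[0])) / (f0 + 1e-8)
    rms = librosa.feature.rms(y=y, frame_length=frame_len, hop_length=hop)[0][:n]
    shimmer = np.abs(np.diff(rms, prepend=rms[0])) / (rms + 1e-8)

    # Zero-crossing rate
    zcr = librosa.feature.zero_crossing_rate(
        y, frame_length=frame_len, hop_length=hop)[0][:n]

    # First formant estimated from the roots of the LPC polynomial
    def first_formant(a):
        roots = np.roots(np.concatenate(([1.0], a)))
        roots = roots[np.imag(roots) > 0]
        freqs = np.sort(np.angle(roots) * sr / (2 * np.pi))
        return freqs[0] if len(freqs) else 0.0

    f1 = np.array([first_formant(a) for a in lpc])

    # F = [LPC, f0, f0 perturbation, amplitude, amplitude perturbation, ZCR, formant]
    return np.column_stack([lpc, f0, jitter, rms, shimmer, zcr, f1])
```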

[0077] Preferably, the acoustic feature extraction in step S1 specifically includes the following steps:

[0078] S1.1. Pre-emphasis: filter the input speech with a digital filter whose transfer function is H(z) = 1 - αz^(-1), where α is a constant coefficient in the range [0.9, 1];
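In the time domain this filter is simply y[n] = x[n] - αx[n-1]; a minimal NumPy version (α = 0.97 is an assumed value inside the stated range):

```python
import numpy as np

def pre_emphasis(x, alpha=0.97):
    # Time-domain form of H(z) = 1 - alpha * z^(-1): y[n] = x[n] - alpha * x[n-1]
    return np.append(x[0], x[1:] - alpha * x[:-1])
```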

[0079] S1.2. Framing: Divide the pre-empha...
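The framing step is cut off in the source; as a sketch of a conventional implementation, the pre-emphasized signal is split into short overlapping frames (the 25 ms window and 10 ms hop below are common defaults, not values from the patent):

```python
import numpy as np

def frame_signal(x, sr=16000, frame_ms=25, hop_ms=10):
    # Split the pre-emphasized signal into overlapping frames;
    # window/hop sizes here are assumptions, not the patent's.
    frame_len = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])
```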


Abstract

The invention discloses a method for estimating the severity of dysarthria based on deep audio features. The method comprises the following steps: acoustic features are extracted; the acoustic features are input to a deep neural network with a bottleneck layer, and deep audio features are extracted from the bottleneck layer; the deep audio features are taken as input, and the Baum-Welch algorithm is used to train hidden Markov models; and lastly, the deep audio features of test voice samples are input in turn into the four trained hidden Markov models, four output probabilities are obtained using the Viterbi algorithm, and the category corresponding to the model with the highest output probability is taken as the dysarthria severity, i.e., the decision result. The method is advantageous in that the deep audio features are deep transformation features which, compared with traditional acoustic features, describe the feature differences of dysarthric speech more effectively and therefore yield better results when estimating dysarthria severity.
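A condensed sketch of the pipeline the abstract describes, using Keras for the bottleneck network and hmmlearn for the hidden Markov models. The layer sizes, bottleneck width, number of HMM states, and Gaussian emissions are all assumptions; hmmlearn's fit runs Baum-Welch (EM) for such models, and decode scores a sequence with the Viterbi algorithm:

```python
import numpy as np
from tensorflow import keras
from hmmlearn.hmm import GaussianHMM

def build_bottleneck_dnn(input_dim, n_classes, bottleneck_dim=40):
    # DNN with a narrow "bottleneck" hidden layer; sizes are assumptions.
    inp = keras.Input(shape=(input_dim,))
    h = keras.layers.Dense(512, activation="relu")(inp)
    h = keras.layers.Dense(512, activation="relu")(h)
    bottleneck = keras.layers.Dense(bottleneck_dim, activation="relu",
                                    name="bottleneck")(h)
    out = keras.layers.Dense(n_classes, activation="softmax")(bottleneck)
    model = keras.Model(inp, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    # Second model that stops at the bottleneck layer = deep audio features
    extractor = keras.Model(inp, bottleneck)
    return model, extractor

def train_hmms(features_by_class, n_states=5):
    # One HMM per severity class; hmmlearn's fit runs Baum-Welch (EM).
    hmms = {}
    for label, seqs in features_by_class.items():
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=20)
        m.fit(X, lengths)
        hmms[label] = m
    return hmms

def classify(hmms, seq):
    # Viterbi log-probability under each class model; pick the best.
    scores = {label: m.decode(seq, algorithm="viterbi")[0]
              for label, m in hmms.items()}
    return max(scores, key=scores.get)
```

In use, the softmax network would first be trained on frame-level acoustic features, the extractor would then convert every utterance into a sequence of bottleneck features, and classify would pick one of the four severity classes for a test utterance.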

Description

Technical field

[0001] The invention relates to speech processing and deep learning technology, in particular to a method for estimating the severity of dysarthria based on deep audio features.

Background technique

[0002] Dysarthria is the most common type of speech dysfunction in children. It often manifests as unclear articulation and can be subdivided into omitted sounds, substituted sounds, distorted sounds, and superfluous sounds, all of which affect normal speech communication and often prevent the listener from understanding what the patient wishes to express. However, because it brings no obvious appearance defect or physical pain to the patient, the condition is often not detected in time, thereby missing the best window for correction. As children grow older, the time and cost of dysarthria rehabilitation increase rapidly. Therefore, timely detection of whether a child has dysarthria is of great significance for rehabilitation. At prese...


Application Information

IPC(8): G10L25/66; G10L25/30; G10L25/27; G10L25/03; G10L25/12
CPC: G10L25/66; G10L25/03; G10L25/12; G10L25/27; G10L25/30
Inventors: 李鹏乾, 李艳雄, 李锦彬
Owner: SOUTH CHINA UNIV OF TECH