Dysarthric Speech Classification Using Glottal Features Computed from Non-words, Words and Sentences

Narendra N P, Paavo Alku


Dysarthria is a neuro-motor disorder resulting from the disruption of normal activity in speech production, leading to slow, slurred, and imprecise speech of reduced intelligibility. Automatic classification of dysarthria from speech can serve as a potential clinical tool in medical treatment. This paper examines the effectiveness of glottal source parameters in dysarthric speech classification from three categories of speech signals: non-words, words, and sentences. In addition to the glottal parameters, two sets of acoustic parameters extracted by the openSMILE toolkit are used as baseline features. A dysarthric speech classification system is proposed by training support vector machines (SVMs) on features extracted from speech utterances together with labels indicating dysarthric/healthy. Classification accuracy results indicate that the glottal parameters contain discriminating information required for the identification of dysarthria. Additionally, the complementary nature of the glottal parameters is demonstrated: in combination with the openSMILE-based acoustic features, they yield improved classification accuracy. Classification accuracies of the glottal and openSMILE features are analyzed separately for non-words, words, and sentences. Results indicate that, in terms of classification accuracy, the word level is best suited for identifying the presence of dysarthria.
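A minimal sketch of the classification setup described above: an SVM trained on per-utterance feature vectors with dysarthric/healthy labels. This is not the authors' exact pipeline; the glottal and openSMILE feature extraction steps are mocked with synthetic vectors, and all dimensions, kernel choices, and cluster parameters are illustrative assumptions.

```python
# Hedged sketch: SVM-based dysarthric vs. healthy classification from
# per-utterance feature vectors. Real features (glottal parameters,
# openSMILE acoustic features) are replaced here by synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n, d = 200, 12  # utterances per class x feature dimension (placeholder sizes)

# Synthetic stand-ins for extracted features: healthy (0) and dysarthric (1).
healthy = rng.normal(0.0, 1.0, size=(n, d))
dysarthric = rng.normal(2.0, 1.0, size=(n, d))
X = np.vstack([healthy, dysarthric])
y = np.array([0] * n + [1] * n)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# Standardize features, then fit an RBF-kernel SVM (a common default choice).
scaler = StandardScaler().fit(X_tr)
clf = SVC(kernel="rbf", C=1.0).fit(scaler.transform(X_tr), y_tr)
acc = clf.score(scaler.transform(X_te), y_te)
print(f"classification accuracy: {acc:.2f}")
```

In the paper's setting, each row of `X` would instead hold glottal parameters and/or openSMILE acoustic features computed from one non-word, word, or sentence utterance, and accuracy would be compared across those three signal categories.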


 DOI: 10.21437/Interspeech.2018-1059

Cite as: N P, N., Alku, P. (2018) Dysarthric Speech Classification Using Glottal Features Computed from Non-words, Words and Sentences. Proc. Interspeech 2018, 3403-3407, DOI: 10.21437/Interspeech.2018-1059.


@inproceedings{NP2018,
  author={Narendra {N P} and Paavo Alku},
  title={Dysarthric Speech Classification Using Glottal Features Computed from Non-words, Words and Sentences},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={3403--3407},
  doi={10.21437/Interspeech.2018-1059},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1059}
}