INTERSPEECH 2013
14th Annual Conference of the International Speech Communication Association

Lyon, France
August 25-29, 2013

Classification of Depression State Based on Articulatory Precision

Brian S. Helfer (1), Thomas F. Quatieri (1), James R. Williamson (1), Daryush D. Mehta (2), Rachelle Horwitz (1), Bea Yu (1)

(1) MIT Lincoln Laboratory, USA
(2) Massachusetts General Hospital, USA

Neurophysiological changes in the brain associated with major depressive disorder can disrupt articulatory precision in speech production. Motivated by this observation, we address the hypothesis that articulatory features, as manifested through formant frequency tracks, can help in automatically classifying depression state. Specifically, we investigate the relative importance of vocal tract formant frequencies and their dynamic features from sustained vowels and conversational speech. Using a database consisting of audio from 35 subjects with clinical measures of depression severity, we explore the performance of Gaussian mixture model (GMM) and support vector machine (SVM) classifiers. With only formant frequencies and their dynamics given by velocity and acceleration, we show that depression state can be classified with an optimal sensitivity/specificity/area under the ROC curve of 0.86/0.64/0.70 and 0.77/0.77/0.73 for GMMs and SVMs, respectively. Future work will involve merging our formant-based characterization with vocal source and prosodic features.
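The feature pipeline the abstract describes — formant frequency tracks augmented with their velocity and acceleration, then scored by class-conditional GMMs — can be sketched as follows. This is a minimal illustration under assumed conventions, not the authors' implementation: the formant data here is synthetic, the finite-difference dynamics (via numpy.gradient) and the two-component mixtures are illustrative choices, and the two-class log-likelihood-ratio score stands in for the paper's full classifier and evaluation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def formant_dynamics(formants):
    """Augment formant tracks (frames x num_formants, in Hz) with
    velocity and acceleration via frame-wise finite differences."""
    vel = np.gradient(formants, axis=0)   # first difference: velocity
    acc = np.gradient(vel, axis=0)        # second difference: acceleration
    return np.hstack([formants, vel, acc])

# Synthetic F1/F2/F3 tracks for two hypothetical speaker groups
# (means and spreads are made up for illustration only).
rng = np.random.default_rng(0)
X_dep = formant_dynamics(rng.normal([500, 1500, 2500], 50, size=(200, 3)))
X_ctl = formant_dynamics(rng.normal([550, 1600, 2600], 50, size=(200, 3)))

# One GMM per class over the 9-dimensional feature vectors
gmm_dep = GaussianMixture(n_components=2, random_state=0).fit(X_dep)
gmm_ctl = GaussianMixture(n_components=2, random_state=0).fit(X_ctl)

def llr_score(X):
    """Average per-frame log-likelihood ratio: positive favors the
    'depressed' model, negative the control model."""
    return gmm_dep.score(X) - gmm_ctl.score(X)
```

A held-out recording would be scored by `llr_score` and thresholded, with the threshold swept to trace out the sensitivity/specificity operating points the abstract reports.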


Bibliographic reference.  Helfer, Brian S. / Quatieri, Thomas F. / Williamson, James R. / Mehta, Daryush D. / Horwitz, Rachelle / Yu, Bea (2013): "Classification of depression state based on articulatory precision", In INTERSPEECH-2013, 2172-2176.