Depression detection using vocal biomarkers is a highly researched area. Articulatory coordination features (ACFs) are developed based on the changes in neuromotor coordination caused by psychomotor slowing, a key symptom of Major Depressive Disorder. However, the findings of existing studies are mostly validated on a single database, which limits the generalizability of the results. Variability across different depression databases adversely affects performance in cross-corpus evaluations (CCEs). We propose to develop a generalized classifier for depression detection using a dilated Convolutional Neural Network (CNN) trained on ACFs extracted from two depression databases. We show that ACFs derived from Vocal Tract Variables (TVs) show promise as a robust set of features for depression detection. Our model achieves relative accuracy improvements of ~10% compared to CCEs performed on models trained on a single database. We extend the study to show that fusing TVs and Mel-Frequency Cepstral Coefficients (MFCCs) can further improve the performance of this classifier.
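The classifier described above relies on dilated convolutions, which widen a kernel's receptive field over a time series (such as an ACF trajectory) without adding parameters. The following is a minimal sketch of the dilation mechanism itself in plain Python, not the authors' network; the function name and toy input are illustrative assumptions.

```python
def dilated_conv1d(x, kernel, dilation=1):
    """Valid-mode 1-D convolution whose kernel taps are spaced
    `dilation` samples apart (a trous convolution)."""
    span = (len(kernel) - 1) * dilation  # receptive field minus one
    return [
        sum(kernel[j] * x[i + j * dilation] for j in range(len(kernel)))
        for i in range(len(x) - span)
    ]

# Hypothetical toy sequence standing in for one feature channel over time.
signal = [1, 2, 3, 4, 5, 6]
# With dilation=2, a 3-tap kernel sees samples i, i+2, i+4,
# covering 5 time steps while using only 3 weights.
print(dilated_conv1d(signal, [1, 1, 1], dilation=2))  # → [9, 12]
```

Stacking such layers with increasing dilation rates lets a CNN capture the longer-span coordination patterns that ACFs are designed to measure, while keeping the model compact.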
Cite as: Seneviratne, N., Espy-Wilson, C. (2021) Generalized Dilated CNN Models for Depression Detection Using Inverted Vocal Tract Variables. Proc. Interspeech 2021, 4513-4517, doi: 10.21437/Interspeech.2021-1960
@inproceedings{seneviratne21b_interspeech, author={Nadee Seneviratne and Carol Espy-Wilson}, title={{Generalized Dilated CNN Models for Depression Detection Using Inverted Vocal Tract Variables}}, year=2021, booktitle={Proc. Interspeech 2021}, pages={4513--4517}, doi={10.21437/Interspeech.2021-1960} }