Recognition of Depression in Bipolar Disorder: Leveraging Cohort and Person-Specific Knowledge

Soheil Khorram, John Gideon, Melvin McInnis, Emily Mower Provost


Individuals with bipolar disorder typically exhibit changes in the acoustics of their speech. Mobile health systems seek to model these changes to automatically detect and correctly identify an individual's current mood state, and ultimately to predict impending mood episodes. We have developed PRIORI (Predicting Individual Outcomes for Rapid Intervention), a program that analyzes the acoustics of speech collected from mobile smartphones as predictors of mood state. Mood prediction systems generally assume that an individual's symptomatology can be modeled using patterns common to a cohort population, due to limitations in the size of available datasets. However, individuals are unique. This paper explores person-level systems built from the current PRIORI database, an extensive longitudinal collection composed of two subsets: a smaller labeled portion and a larger unlabeled portion. The person-level system employs the unlabeled portion to extract i-vectors, which characterize individual speakers. The labeled portion is then used to train person-level and population-level supervised classifiers, operating on the i-vectors and on speech rhythm statistics, respectively. Combining these two approaches yields a significant improvement over the baseline system, demonstrating the importance of a multi-level approach to capturing depression symptomatology.
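The abstract describes unifying a person-level classifier (operating on i-vectors) with a population-level classifier (operating on speech rhythm statistics). One common way to combine two such subsystems is score-level fusion of their posterior scores; the sketch below illustrates that general idea. The function names, the fusion weight, and the decision threshold are illustrative assumptions, not the authors' exact method.

```python
# Hedged sketch: score-level fusion of two depression classifiers.
# `person_score` would come from the person-level (i-vector) classifier,
# `population_score` from the population-level (rhythm-statistics) one;
# both are assumed to be posterior probabilities in [0, 1].

def fuse_scores(person_score, population_score, alpha=0.5):
    """Weighted average of the two posterior scores.

    alpha controls how much weight the person-level system receives;
    alpha=0.5 (an assumption here) weights both systems equally.
    """
    return alpha * person_score + (1.0 - alpha) * population_score

def classify(person_score, population_score, alpha=0.5, threshold=0.5):
    """Return 1 (depressed) if the fused score meets the threshold, else 0."""
    return int(fuse_scores(person_score, population_score, alpha) >= threshold)
```

In practice the weight and threshold would be tuned on held-out labeled data; the paper should be consulted for the actual unification strategy used.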


DOI: 10.21437/Interspeech.2016-837

Cite as:

Khorram, S., Gideon, J., McInnis, M., Provost, E.M. (2016) Recognition of Depression in Bipolar Disorder: Leveraging Cohort and Person-Specific Knowledge. Proc. Interspeech 2016, 1215-1219.

Bibtex
@inproceedings{Khorram+2016,
  author={Soheil Khorram and John Gideon and Melvin McInnis and Emily Mower Provost},
  title={Recognition of Depression in Bipolar Disorder: Leveraging Cohort and Person-Specific Knowledge},
  year={2016},
  booktitle={Interspeech 2016},
  doi={10.21437/Interspeech.2016-837},
  url={http://dx.doi.org/10.21437/Interspeech.2016-837},
  pages={1215--1219}
}