Emotional Speech of Mentally and Physically Disabled Individuals: Introducing the EmotAsS Database and First Findings

Simone Hantke, Hesam Sagha, Nicholas Cummins, Björn Schuller


The automatic recognition of emotion from speech is a mature research field with a large number of publicly available corpora. However, to the best of the authors' knowledge, none of these datasets consist solely of emotional speech samples from individuals with mental, neurological, and/or physical disabilities. Yet such individuals could benefit from speech-based assistive technologies that enhance their communication with their environment and help them manage their daily work processes. With the aim of advancing these technologies, we fill this void in emotional speech resources by introducing the EmotAsS (Emotional Sensitivity Assistance System for People with Disabilities) corpus. It consists of spontaneous emotional German speech recorded from 17 mentally, neurologically, and/or physically disabled participants in their daily work environment, amounting to just under 11 hours of total speech time and approximately 12.7 k utterances after segmentation. The data were transcribed and labelled in seven emotional categories, as well as for speaker intelligibility. We present a set of baseline results for arousal and valence emotion recognition based on standard acoustic and linguistic features.


DOI: 10.21437/Interspeech.2017-409

Cite as: Hantke, S., Sagha, H., Cummins, N., Schuller, B. (2017) Emotional Speech of Mentally and Physically Disabled Individuals: Introducing the EmotAsS Database and First Findings. Proc. Interspeech 2017, 3137-3141, DOI: 10.21437/Interspeech.2017-409.


@inproceedings{Hantke2017,
  author={Simone Hantke and Hesam Sagha and Nicholas Cummins and Björn Schuller},
  title={Emotional Speech of Mentally and Physically Disabled Individuals: Introducing the EmotAsS Database and First Findings},
  year={2017},
  booktitle={Proc. Interspeech 2017},
  pages={3137--3141},
  doi={10.21437/Interspeech.2017-409},
  url={http://dx.doi.org/10.21437/Interspeech.2017-409}
}