ISCA Archive Interspeech 2009

The INTERSPEECH 2009 emotion challenge

Björn Schuller, Stefan Steidl, Anton Batliner

The last decade has seen a substantial body of literature on the recognition of emotion from speech. However, in comparison to related speech processing tasks such as Automatic Speech and Speaker Recognition, practically no standardised corpora and test conditions exist to compare performances under exactly the same conditions. Instead, a multiplicity of evaluation strategies is employed, such as cross-validation or percentage splits without proper instance definition, which prevents exact reproducibility. Further, in order to face more realistic scenarios, the community is in desperate need of more spontaneous and less prototypical data. This INTERSPEECH 2009 Emotion Challenge aims at bridging the gap between excellent research on human emotion recognition from speech and the low comparability of results. The FAU Aibo Emotion Corpus [1] serves as the basis, with clearly defined test and training partitions incorporating speaker independence and different room acoustics, as needed in most real-life settings. This paper introduces the challenge, the corpus, the features, and benchmark results of two popular approaches towards emotion recognition from speech.
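The speaker-independent partitioning the abstract calls for can be illustrated with a short sketch. The function below (hypothetical, not from the challenge materials; utterance and speaker IDs are invented for illustration) assigns every utterance of a given speaker to exactly one partition, in contrast to a naive random split, which may leak a speaker's voice into both train and test sets.

```python
def speaker_independent_split(utterances, test_speakers):
    """Partition (utterance_id, speaker_id) pairs so that no
    speaker appears in both the training and the test set."""
    train, test = [], []
    for utt, spk in utterances:
        # Route by speaker, never by individual utterance.
        (test if spk in test_speakers else train).append(utt)
    return train, test

# Toy data: three speakers with two utterances each (illustrative only).
data = [("u1", "s1"), ("u2", "s1"), ("u3", "s2"),
        ("u4", "s2"), ("u5", "s3"), ("u6", "s3")]
train, test = speaker_independent_split(data, test_speakers={"s3"})
```

Because the split is defined at the speaker level, it is exactly reproducible: publishing the list of test speakers fully specifies the partition, which is the kind of "proper instance definition" the challenge provides.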

doi: 10.21437/Interspeech.2009-103

Cite as: Schuller, B., Steidl, S., Batliner, A. (2009) The INTERSPEECH 2009 emotion challenge. Proc. Interspeech 2009, 312-315, doi: 10.21437/Interspeech.2009-103

@inproceedings{schuller2009interspeech,
  author={Björn Schuller and Stefan Steidl and Anton Batliner},
  title={{The INTERSPEECH 2009 emotion challenge}},
  booktitle={Proc. Interspeech 2009},
  year={2009},
  pages={312--315},
  doi={10.21437/Interspeech.2009-103}
}