We present a study on the effect of reverberation on acoustic-linguistic recognition of non-prototypical emotions during child-robot interaction. Investigating the well-defined Interspeech 2009 Emotion Challenge task of recognizing negative emotions in children's speech, we focus on the impact of artificial and real reverberation conditions on the quality of linguistic features and on emotion recognition accuracy. To maintain acceptable recognition performance for both spoken content and affective state, we consider matched and multi-condition training and apply our novel multi-stream automatic speech recognition system, which outperforms conventional Hidden Markov Modeling. Depending on the acoustic condition, we obtain unweighted emotion recognition accuracies between 65.4% and 70.3% when applying our multi-stream system in combination with the SimpleLogistic algorithm for joint acoustic-linguistic analysis.
Bibliographic reference. Wöllmer, Martin / Weninger, Felix / Steidl, Stefan / Batliner, Anton / Schuller, Björn (2011): "Speech-based non-prototypical affect recognition for child-robot interaction in reverberated environments", in Proc. INTERSPEECH 2011, pp. 3113-3116.