In this paper, we describe emotion recognition experiments carried out on spontaneous affective speech with the aim of comparing the added value of annotations of felt emotion versus annotations of perceived emotion. Using speech material available in the TNO-Gaming corpus (a corpus containing audiovisual recordings of people playing video games), speech-based affect recognizers were developed that predict scalar values of Arousal and Valence. Two types of recognizers were developed in parallel: one trained with felt-emotion annotations (generated by the gamers themselves) and one trained with perceived/observed-emotion annotations (generated by a group of observers). The experiments showed that, in speech, with the methods and features currently used, observed emotions are easier to predict than felt emotions. The results suggest that recognition performance strongly depends on how and by whom the emotion annotations are carried out.
Bibliographic reference: Truong, Khiet P. / van Leeuwen, David A. / Neerincx, Mark A. / de Jong, Franciska M. G. (2009): "Arousal and valence prediction in spontaneous emotional speech: felt versus perceived emotion", in INTERSPEECH-2009, 2027-2030.