INTERSPEECH 2009
10th Annual Conference of the International Speech Communication Association

Brighton, United Kingdom
September 6-10, 2009

Arousal and Valence Prediction in Spontaneous Emotional Speech: Felt versus Perceived Emotion

Khiet P. Truong (1), David A. van Leeuwen (2), Mark A. Neerincx (2), Franciska M. G. de Jong (1)

(1) University of Twente, The Netherlands
(2) TNO Defence, The Netherlands

In this paper, we describe emotion recognition experiments carried out on spontaneous affective speech, with the aim of comparing the added value of annotations of felt emotion versus annotations of perceived emotion. Using speech material from the TNO-GAMING corpus (a corpus of audiovisual recordings of people playing video games), speech-based affect recognizers were developed that predict scalar Arousal and Valence values. Two types of recognizers were developed in parallel: one trained on felt-emotion annotations (produced by the gamers themselves) and one trained on perceived/observed-emotion annotations (produced by a group of observers). The experiments showed that, in speech, with the methods and features currently used, observed emotions are easier to predict than felt emotions. The results suggest that recognition performance strongly depends on how and by whom the emotion annotations are carried out.
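To make the experimental setup concrete, the sketch below illustrates the idea of training two scalar-value recognizers in parallel on the same speech features, one against felt-emotion annotations and one against perceived-emotion annotations. It is not taken from the paper: the library, the SVR regressor, the synthetic feature dimensions, and Pearson correlation as the evaluation measure are all assumptions made purely for illustration.

```python
# Illustrative sketch only -- not the authors' actual system. It assumes
# per-utterance acoustic feature vectors have already been extracted and are
# paired with two label sets: felt (self-reported) and perceived
# (observer-rated) scalar Arousal values.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_utterances, n_features = 400, 50                      # placeholder sizes
X = rng.normal(size=(n_utterances, n_features))         # stand-in for acoustic features
felt_arousal = rng.uniform(-1, 1, n_utterances)         # self-annotated targets
perceived_arousal = rng.uniform(-1, 1, n_utterances)    # observer-annotated targets

def train_and_score(X, y):
    """Train a scalar-value predictor and report Pearson's r on held-out data."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    model = SVR(kernel="rbf").fit(X_tr, y_tr)
    r, _ = pearsonr(y_te, model.predict(X_te))
    return r

# Two recognizers trained in parallel on identical speech features,
# differing only in whose annotations provide the target values.
print("felt-emotion model, r =", train_and_score(X, felt_arousal))
print("perceived-emotion model, r =", train_and_score(X, perceived_arousal))
```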


Bibliographic reference. Truong, Khiet P. / Leeuwen, David A. van / Neerincx, Mark A. / Jong, Franciska M. G. de (2009): "Arousal and valence prediction in spontaneous emotional speech: felt versus perceived emotion", In INTERSPEECH-2009, 2027-2030.