10th Annual Conference of the International Speech Communication Association

Brighton, United Kingdom
September 6-10, 2009

Evaluating Evaluators: A Case Study in Understanding the Benefits and Pitfalls of Multi-Evaluator Modeling

Emily Mower, Maja J. Matarić, Shrikanth S. Narayanan

University of Southern California, USA

Emotion perception is a complex process, often measured using stimuli presentation experiments that query evaluators for their perceptual ratings of emotional cues. These evaluations contain large amounts of variability, both related and unrelated to the evaluated utterances. One approach to handling this variability is to model emotion perception at the individual level. However, the perceptions of individual evaluators may not adequately capture the emotional acoustic properties of an utterance. This problem can be mitigated by the common technique of averaging evaluations from multiple evaluators. We demonstrate that this averaging procedure improves classification performance when compared to classification results from models created using individual-specific evaluations. We also demonstrate that the performance increases are related to the consistency with which evaluators label data. These results suggest that the acoustic properties of emotional speech are better captured using models formed from averaged evaluations rather than from individual-specific evaluations.
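The averaging procedure described in the abstract can be sketched as follows. This is an illustrative example only, with hypothetical utterance names and rating values (not the paper's data): per-utterance ratings from several evaluators are averaged into a single consensus label, and the spread across evaluators gives a simple measure of the inter-evaluator consistency that the paper relates to performance.

```python
from statistics import mean, stdev

# Hypothetical per-utterance emotion ratings (e.g. valence on a 1-5
# scale) from three evaluators; values are illustrative, not from
# the paper's dataset.
ratings = {
    "utt1": [4, 5, 4],
    "utt2": [2, 1, 2],
    "utt3": [3, 5, 1],  # evaluators disagree strongly here
}

# Averaged evaluation: one consensus label per utterance,
# in place of any single evaluator's individual-specific label.
averaged = {utt: mean(r) for utt, r in ratings.items()}

# Inter-evaluator consistency: a lower standard deviation means
# evaluators labeled the utterance more consistently.
spread = {utt: stdev(r) for utt, r in ratings.items()}
```

Under this sketch, "utt3" receives an average label of 3 but with high spread, so a model trained on its averaged label would rest on a less consistent (and, per the paper's finding, less useful) evaluation than one for "utt1" or "utt2".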


Bibliographic reference. Mower, Emily / Matarić, Maja J. / Narayanan, Shrikanth S. (2009): "Evaluating evaluators: a case study in understanding the benefits and pitfalls of multi-evaluator modeling", in INTERSPEECH-2009, 1583-1586.