Interspeech'2005 - Eurospeech

Lisbon, Portugal
September 4-8, 2005

Using Context to Improve Emotion Detection in Spoken Dialog Systems

Jackson Liscombe (1), Giuseppe Riccardi (2), Dilek Hakkani-Tür (2)

(1) Columbia University, USA; (2) AT&T Labs Research, USA

Most research on detecting the emotional state of users of spoken dialog systems does not fully exploit the context that dialog structure provides. This paper reports results of machine learning experiments designed to automatically classify the emotional state of user turns using a corpus of 5,690 dialogs collected with the "How May I Help You(SM)" spoken dialog system. We show that augmenting standard lexical and prosodic features with contextual features that exploit the structure of spoken dialog and track user state increases classification accuracy by 2.6%.
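The feature-augmentation idea described above can be illustrated with a minimal sketch (this is not the authors' code; the data, feature names, and use of scikit-learn are invented for demonstration). Lexical features (a bag of words over the turn transcript) and prosodic features (e.g., energy and pitch statistics) are concatenated with a contextual feature tracking the emotional label of the previous user turn, and a classifier is trained on the combined representation:

```python
# Illustrative sketch (not from the paper): classifying the emotional state
# of a user turn from lexical, prosodic, and contextual features.
# All turns, feature values, and labels below are toy examples.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy user turns: (transcript, [energy, mean pitch], previous-turn label)
turns = [
    ("i want to speak to a person now", [0.9, 210.0], "frustrated"),
    ("yes that is correct thank you",   [0.4, 180.0], "neutral"),
    ("this is the third time i called", [0.8, 205.0], "frustrated"),
    ("please check my account balance", [0.3, 175.0], "neutral"),
]
labels = ["frustrated", "neutral", "frustrated", "neutral"]

# Lexical features: bag of words over the transcript
vectorizer = CountVectorizer()
X_lex = vectorizer.fit_transform(t[0] for t in turns)

# Prosodic features: toy energy and pitch values per turn
X_pros = csr_matrix(np.array([t[1] for t in turns]))

# Contextual feature: was the previous user turn labeled "frustrated"?
X_ctx = csr_matrix(np.array([[1.0 if t[2] == "frustrated" else 0.0]
                             for t in turns]))

# Augment lexical + prosodic features with dialog context and train
X = hstack([X_lex, X_pros, X_ctx])
clf = LogisticRegression().fit(X, labels)
predictions = clf.predict(X)
```

The design choice mirrored here is that context enters simply as extra columns in the feature matrix, so any standard classifier can consume it alongside the per-turn lexical and prosodic features.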


Bibliographic reference. Liscombe, Jackson / Riccardi, Giuseppe / Hakkani-Tür, Dilek (2005): "Using context to improve emotion detection in spoken dialog systems", in Proc. INTERSPEECH-2005, 1845-1848.