Interspeech'2005 - Eurospeech
Most research on detecting the emotional state of users of spoken dialog systems does not fully exploit the contextual information that dialog structure provides. This paper reports results of machine learning experiments designed to automatically classify the emotional state of user turns, using a corpus of 5,690 dialogs collected with the "How May I Help You℠" spoken dialog system. We show that augmenting standard lexical and prosodic features with contextual features that exploit the structure of spoken dialog and track user state increases classification accuracy by 2.6%.
Bibliographic reference. Liscombe, Jackson / Riccardi, Giuseppe / Hakkani-Tür, Dilek (2005): "Using context to improve emotion detection in spoken dialog systems", In INTERSPEECH-2005, 1845-1848.
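The abstract's central idea — concatenating contextual features (e.g. the previous turn's predicted emotion or dialog-act label) onto standard lexical and prosodic features before classification — can be sketched in a few lines. The feature names, data values, and the toy nearest-centroid classifier below are all invented for illustration; the paper's actual feature set and learner are not reproduced here.

```python
from collections import defaultdict

def make_feature_vector(lexical, prosodic, contextual=None):
    """Concatenate per-turn feature groups into a single classifier input.

    `contextual` is the optional augmentation the abstract describes,
    e.g. hypothetical features such as the prior turn's predicted
    emotion or its dialog-act label.
    """
    vec = list(lexical) + list(prosodic)
    if contextual is not None:
        vec += list(contextual)
    return vec

def nearest_centroid_predict(train_vectors, train_labels, x):
    """Toy classifier: assign x the label of the nearest class centroid."""
    sums, counts = {}, defaultdict(int)
    for vec, lab in zip(train_vectors, train_labels):
        if lab not in sums:
            sums[lab] = [0.0] * len(vec)
        sums[lab] = [s + v for s, v in zip(sums[lab], vec)]
        counts[lab] += 1
    best_lab, best_dist = None, float("inf")
    for lab, total in sums.items():
        centroid = [s / counts[lab] for s in total]
        dist = sum((a - b) ** 2 for a, b in zip(centroid, x))
        if dist < best_dist:
            best_lab, best_dist = lab, dist
    return best_lab
```

A classifier trained on the augmented vectors sees strictly more information per turn than one trained on lexical and prosodic features alone, which is the mechanism behind the reported 2.6% accuracy gain.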