ISCA Archive Interspeech 2005

Using context to improve emotion detection in spoken dialog systems

Jackson Liscombe, Giuseppe Riccardi, Dilek Hakkani-Tür

Most research on detecting the emotional state of users of spoken dialog systems does not fully exploit the context that dialog structure provides. This paper reports results of machine learning experiments designed to automatically classify the emotional state of user turns, using a corpus of 5,690 dialogs collected with the "How May I Help YouSM" spoken dialog system. We show that augmenting standard lexical and prosodic features with contextual features that exploit the structure of spoken dialog and track user state increases classification accuracy by 2.6%.
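The core idea of the abstract — augmenting per-turn lexical/prosodic features with features drawn from the dialog context — can be sketched as follows. This is an illustrative toy, not the paper's actual system: the feature names (pitch, energy, prior-turn emotion, dialog position, running pitch average) and the nearest-centroid classifier are assumptions chosen for brevity, not the authors' feature set or learner.

```python
# Illustrative sketch (NOT the paper's system): augment per-turn acoustic
# stand-in features with contextual features from dialog history, then
# classify with a simple nearest-centroid rule on toy data.
import math

def contextual_features(turn_index, prev_label, pitch_history):
    """Hypothetical contextual features: normalized dialog position,
    whether the previous turn was negative, and a running pitch average."""
    position = turn_index / 10.0
    prior_negative = 1.0 if prev_label == "negative" else 0.0
    pitch_trend = sum(pitch_history) / len(pitch_history) if pitch_history else 0.0
    return [position, prior_negative, pitch_trend]

def featurize(turns):
    """turns: list of (pitch, energy, emotion_label) per user turn.
    Returns (augmented_vector, label) pairs, threading context forward."""
    vectors, prev_label, pitch_history = [], "neutral", []
    for i, (pitch, energy, label) in enumerate(turns):
        base = [pitch, energy]  # stand-ins for lexical/prosodic features
        vec = base + contextual_features(i, prev_label, pitch_history)
        vectors.append((vec, label))
        prev_label = label      # oracle context, for illustration only
        pitch_history.append(pitch)
    return vectors

def nearest_centroid(train, x):
    """Classify x by Euclidean distance to per-class mean vectors."""
    by_label = {}
    for vec, label in train:
        by_label.setdefault(label, []).append(vec)
    best, best_dist = None, math.inf
    for label, vecs in by_label.items():
        mean = [sum(dim) / len(dim) for dim in zip(*vecs)]
        d = math.dist(mean, x)
        if d < best_dist:
            best, best_dist = label, d
    return best

# Toy dialog: (pitch, energy, emotion label) for four user turns.
turns = [(200, 0.5, "neutral"), (210, 0.6, "neutral"),
         (260, 0.9, "negative"), (270, 0.95, "negative")]
train = featurize(turns)
print(nearest_centroid(train, train[2][0]))  # → negative
```

The point of the sketch is structural: the context features are computed from earlier turns in the same dialog, so each turn's vector carries information about user state that an isolated-turn classifier would miss.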


doi: 10.21437/Interspeech.2005-583

Cite as: Liscombe, J., Riccardi, G., Hakkani-Tür, D. (2005) Using context to improve emotion detection in spoken dialog systems. Proc. Interspeech 2005, 1845-1848, doi: 10.21437/Interspeech.2005-583

@inproceedings{liscombe05b_interspeech,
  author={Jackson Liscombe and Giuseppe Riccardi and Dilek Hakkani-Tür},
  title={{Using context to improve emotion detection in spoken dialog systems}},
  year=2005,
  booktitle={Proc. Interspeech 2005},
  pages={1845--1848},
  doi={10.21437/Interspeech.2005-583}
}