In this study, we incorporate automatically obtained system and user performance features into machine learning experiments to detect student emotion in computer tutoring dialogs. Our results show a relative improvement of 2.7% in classification accuracy and 8.08% in Kappa over using standard lexical, prosodic, sequential, and identification features. This improvement is comparable to that achieved in previous studies by adding dialog acts or lexical-, prosodic-, and discourse-level contextual features.
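The abstract reports gains in both classification accuracy and Kappa. As an illustrative aside (not from the paper itself), the sketch below shows how Cohen's Kappa and a relative improvement figure are typically computed; the function names and the toy labels are hypothetical.

```python
from collections import Counter

def cohens_kappa(gold, pred):
    """Cohen's Kappa: agreement between two label sequences, corrected for chance."""
    n = len(gold)
    # Observed agreement: fraction of items where the labels match
    po = sum(g == p for g, p in zip(gold, pred)) / n
    # Expected chance agreement from the marginal label frequencies
    gc, pc = Counter(gold), Counter(pred)
    pe = sum(gc[label] * pc[label] for label in gc) / (n * n)
    return (po - pe) / (1 - pe)

def relative_improvement(baseline, new):
    """Relative (not absolute) improvement, as reported in the abstract."""
    return (new - baseline) / baseline

# Toy example: gold vs. predicted emotion labels for four student turns
gold = ["emotional", "emotional", "neutral", "neutral"]
pred = ["emotional", "neutral", "neutral", "neutral"]
print(cohens_kappa(gold, pred))  # 0.5
```

A relative Kappa improvement of 8.08% means, for example, that a baseline Kappa of 0.40 would rise to about 0.432, i.e. `relative_improvement(0.40, 0.432)` is roughly 0.08.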
Cite as: Ai, H., Litman, D.J., Forbes-Riley, K., Rotaru, M., Tetreault, J., Purandare, A. (2006) Using system and user performance features to improve emotion detection in spoken tutoring dialogs. Proc. Interspeech 2006, paper 1682-Tue1A3O.2, doi: 10.21437/Interspeech.2006-274
@inproceedings{ai06_interspeech,
  author={Hua Ai and Diane J. Litman and Kate Forbes-Riley and Mihai Rotaru and Joel Tetreault and Amruta Purandare},
  title={{Using system and user performance features to improve emotion detection in spoken tutoring dialogs}},
  year=2006,
  booktitle={Proc. Interspeech 2006},
  pages={paper 1682-Tue1A3O.2},
  doi={10.21437/Interspeech.2006-274}
}