Privacy-Preserving Speech Analytics for Automatic Assessment of Student Collaboration

Nikoletta Bassiou, Andreas Tsiartas, Jennifer Smith, Harry Bratt, Colleen Richey, Elizabeth Shriberg, Cynthia D’Angelo, Nonye Alozie


This work investigates whether nonlexical information from speech can automatically predict the quality of small-group collaborations. Audio was collected from students as they collaborated in groups of three to solve math problems. Experts in education annotated 30-second time windows by hand for collaboration quality. Speech activity features (computed at the group level) and spectral, temporal, and prosodic features (extracted at the speaker level) were explored. After the latter were transformed from the speaker level to the group level, the features were fused. Results using support vector machines and random forests show that feature fusion yields the best classification performance. The corresponding unweighted average F1 measure on a 4-class prediction task ranges between 40% and 50%, significantly higher than chance (12%). Speech activity features alone are strong predictors of collaboration quality, achieving an F1 measure between 35% and 43%. Speaker-based acoustic features alone achieve lower classification performance, but offer value in fusion. These findings illustrate that the approach under study offers promise for future monitoring of group dynamics, and should be attractive for many collaboration-activity settings in which privacy is desired.
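The pipeline the abstract describes can be sketched minimally: pool speaker-level acoustic features to the group level, fuse them with group-level speech-activity features by concatenation, and score a 4-class classifier with the unweighted (macro-averaged) F1 measure. The sketch below is illustrative only, not the authors' code; the feature dimensions, the random synthetic data, and the choice of mean/std pooling are all assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

n_windows, n_speakers = 200, 3     # 30-second windows, groups of three
n_acoustic, n_activity = 8, 5      # illustrative feature dimensions

# Speaker-level acoustic features: (windows, speakers, features)
acoustic = rng.normal(size=(n_windows, n_speakers, n_acoustic))
# Group-level speech-activity features: (windows, features)
activity = rng.normal(size=(n_windows, n_activity))
labels = rng.integers(0, 4, size=n_windows)  # 4 collaboration-quality classes

# Transform speaker-level features to the group level by pooling
# statistics (mean and std across the three speakers), then fuse
# with the group-level features by concatenation.
group_acoustic = np.concatenate(
    [acoustic.mean(axis=1), acoustic.std(axis=1)], axis=1)
fused = np.concatenate([group_acoustic, activity], axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, random_state=0)
for clf in (SVC(), RandomForestClassifier(random_state=0)):
    clf.fit(X_tr, y_tr)
    # average="macro" gives the unweighted average F1 over the 4 classes
    macro_f1 = f1_score(y_te, clf.predict(X_te), average="macro")
    print(type(clf).__name__, round(macro_f1, 3))
```

On real data one would replace the synthetic arrays with extracted features; the fusion and scoring steps would be unchanged.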


DOI: 10.21437/Interspeech.2016-1569

Cite as

Bassiou, N., Tsiartas, A., Smith, J., Bratt, H., Richey, C., Shriberg, E., D’Angelo, C., Alozie, N. (2016) Privacy-Preserving Speech Analytics for Automatic Assessment of Student Collaboration. Proc. Interspeech 2016, 888-892.

Bibtex
@inproceedings{Bassiou+2016,
  author={Nikoletta Bassiou and Andreas Tsiartas and Jennifer Smith and Harry Bratt and Colleen Richey and Elizabeth Shriberg and Cynthia D'Angelo and Nonye Alozie},
  title={Privacy-Preserving Speech Analytics for Automatic Assessment of Student Collaboration},
  year={2016},
  booktitle={Interspeech 2016},
  doi={10.21437/Interspeech.2016-1569},
  url={http://dx.doi.org/10.21437/Interspeech.2016-1569},
  pages={888--892}
}