An Acoustic Analysis of Child-Child and Child-Robot Interactions for Understanding Engagement during Speech-Controlled Computer Games

Theodora Chaspari, Jill Fain Lehman


Engagement is an essential factor in successful game design and effective human-computer interaction. We analyze the prosodic patterns of child-child and child-robot pairs playing a language-based computer game. Acoustic features include speech loudness and fundamental frequency. We use a linear mixed-effects model to capture the coordination of acoustic patterns between interactors, as well as its relation to annotated engagement levels. Our results indicate that the considered acoustic features are related to engagement levels in both the child-child and child-robot interactions. They further suggest a significant association between the interactors’ prosodic patterns in the child-child scenario, moderated by the co-occurring engagement level. This acoustic coordination is not present in the child-robot interaction, since the robot’s behavior was not automatically adjusted to the child’s. These findings are discussed in relation to automatic robot adaptation and provide a foundation for promoting engagement and enhancing rapport during the considered game-based interactions.
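The moderation analysis the abstract describes — does engagement change how strongly one interactor's prosody tracks the partner's? — can be sketched as a regression with an interaction term. This is a simplified illustration on synthetic data: it omits the per-pair random-effects structure a true linear mixed-effects model would include, and all variable names and coefficient values are invented for the example, not taken from the paper.

```python
import numpy as np

# Synthetic sketch of engagement-moderated acoustic coordination.
# The paper fits a linear mixed-effects model; here we drop the
# random effects and use plain least squares for illustration.
rng = np.random.default_rng(0)
n = 500
partner_loudness = rng.normal(0.0, 1.0, n)        # partner's feature (z-scored)
engagement = rng.integers(0, 2, n).astype(float)  # annotated engagement (0/1)

# Ground truth (invented): baseline coordination slope 0.2,
# strengthened by +0.5 when engagement is high.
child_loudness = (0.1 + 0.2 * partner_loudness + 0.3 * engagement
                  + 0.5 * partner_loudness * engagement
                  + rng.normal(0.0, 0.1, n))

# Design matrix: intercept, partner feature, engagement, interaction
X = np.column_stack([np.ones(n), partner_loudness, engagement,
                     partner_loudness * engagement])
beta, *_ = np.linalg.lstsq(X, child_loudness, rcond=None)

# A clearly positive interaction coefficient (beta[3]) indicates that
# engagement moderates the coordination between the two interactors.
print(beta)
```

In the full mixed-effects formulation, each child-child or child-robot pair would additionally contribute a random intercept (and possibly a random slope), so that coordination is estimated within pairs rather than pooled across them.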


DOI: 10.21437/Interspeech.2016-85

Cite as

Chaspari, T., Lehman, J.F. (2016) An Acoustic Analysis of Child-Child and Child-Robot Interactions for Understanding Engagement during Speech-Controlled Computer Games. Proc. Interspeech 2016, 595-599.

BibTeX
@inproceedings{Chaspari+2016,
  author={Theodora Chaspari and Jill Fain Lehman},
  title={An Acoustic Analysis of Child-Child and Child-Robot Interactions for Understanding Engagement during Speech-Controlled Computer Games},
  year={2016},
  booktitle={Interspeech 2016},
  doi={10.21437/Interspeech.2016-85},
  url={http://dx.doi.org/10.21437/Interspeech.2016-85},
  pages={595--599}
}