We investigate turn-taking behaviors in multi-party conversations at poster sessions. While the poster presenter holds most of the turns during a session, the audience's utterances are more important and should not be missed. In this paper, therefore, we address prediction of turn-taking by the audience, which is divided into two sub-tasks: prediction of speaker change and prediction of the next speaker. We analyze eye-gaze information and its relationship with turn-taking, introducing joint eye-gaze events between the presenter and the audience, and we also parameterize the audience's backchannel patterns. Machine learning with these features shows that the combination of the presenter's prosodic features and the joint eye-gaze features is effective for predicting speaker change, while eye-gaze duration and backchannels preceding the speaker change are useful for predicting the next speaker among the audience.
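The two-stage prediction described above can be sketched as follows. This is a minimal illustration, not the authors' trained model: the feature names (`f0_drop`, `power_drop`, `mutual_gaze`, `gaze_duration`, `backchannels`) and all weights are hypothetical placeholders standing in for the learned prosodic, joint eye-gaze, and backchannel features.

```python
# Illustrative two-stage turn-taking prediction.
# All features and weights below are hypothetical, for illustration only.

def predict_speaker_change(prosody, joint_gaze, threshold=0.5):
    """Stage 1: binary prediction of speaker change at the end of a
    presenter's utterance, combining the presenter's prosodic features
    with joint eye-gaze event features (weights are made up)."""
    score = (0.4 * prosody["f0_drop"]            # pitch fall at utterance end
             + 0.3 * prosody["power_drop"]       # energy fall at utterance end
             + 0.3 * joint_gaze["mutual_gaze"])  # presenter and listener gaze at each other
    return score >= threshold

def predict_next_speaker(audience):
    """Stage 2: once a speaker change is predicted, pick the audience
    member with the strongest gaze-duration and backchannel evidence."""
    def evidence(features):
        return features["gaze_duration"] + 0.5 * features["backchannels"]
    return max(audience, key=lambda person: evidence(audience[person]))

# Toy example with two audience members.
prosody = {"f0_drop": 0.9, "power_drop": 0.8}
joint_gaze = {"mutual_gaze": 1.0}
audience = {
    "A": {"gaze_duration": 2.5, "backchannels": 3},
    "B": {"gaze_duration": 0.8, "backchannels": 1},
}

if predict_speaker_change(prosody, joint_gaze):
    print(predict_next_speaker(audience))  # prints "A"
```

In this sketch the two sub-tasks are kept separate, mirroring the decomposition in the abstract: the next-speaker decision is only made when a speaker change has been predicted.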
Index Terms: multi-party interaction, turn-taking, prosody, eye-gaze
Bibliographic reference. Kawahara, Tatsuya / Iwatate, Takuma / Takanashi, Katsuya (2012): "Prediction of turn-taking by combining prosodic and eye-gaze information in poster conversations", In INTERSPEECH-2012, 727-730.