Engagement Recognition in Spoken Dialogue via Neural Network by Aggregating Different Annotators' Models

Koji Inoue, Divesh Lala, Katsuya Takanashi, Tatsuya Kawahara


This paper addresses engagement recognition based on four multimodal listener behaviors: backchannels, laughing, eye-gaze, and head nodding. Engagement is an indicator of how interested a user is in the current dialogue. Multiple third-party annotators give ground-truth labels of engagement in a human-robot interaction corpus. Since perception of engagement is subjective, the annotations sometimes differ between individual annotators. Conventional methods directly use integrated labels, such as those generated through simple majority voting, and do not consider each annotator's recognition. We propose a two-step engagement recognition method in which each annotator's recognition is modeled and the different annotators' models are aggregated to recognize the integrated label. The proposed neural network consists of two parts. The first part corresponds to each annotator's model, which is trained independently with the corresponding labels. The second part aggregates the different annotators' models to obtain one integrated label. After each part is pre-trained, the whole network is fine-tuned through back-propagation of prediction errors. Experimental results show that the proposed network outperforms baseline models that directly recognize the integrated label without considering differing annotations.
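The two-part structure described in the abstract can be illustrated with a minimal forward-pass sketch. This is not the authors' implementation: the parameter values, the use of simple logistic units for each annotator's model, and the weighted-sum aggregation are all illustrative assumptions; the paper's actual network is trained (pre-trained per part, then fine-tuned end-to-end by back-propagation) rather than hand-set.

```python
import math

def sigmoid(x):
    """Logistic activation used by both parts of the sketch."""
    return 1.0 / (1.0 + math.exp(-x))

def annotator_model(features, weights, bias):
    """First part: one annotator's engagement estimate.

    `features` stand for the four listener behaviors from the paper
    (backchannels, laughing, eye-gaze, head nodding); the weights and
    bias are hypothetical stand-ins for that annotator's trained model.
    """
    z = sum(f * w for f, w in zip(features, weights)) + bias
    return sigmoid(z)

def aggregate(annotator_probs, agg_weights, agg_bias):
    """Second part: combine per-annotator estimates into one
    integrated engagement label (here, a single logistic unit)."""
    z = sum(p * w for p, w in zip(annotator_probs, agg_weights)) + agg_bias
    return sigmoid(z)

# Hypothetical behavior counts for one dialogue segment:
# [backchannels, laughing, eye-gaze, head nodding]
features = [2.0, 0.0, 1.0, 3.0]

# Hypothetical pre-trained parameters for three annotators.
annotators = [
    ([0.4, 0.2, 0.5, 0.3], -1.0),
    ([0.1, 0.6, 0.2, 0.4], -0.5),
    ([0.3, 0.3, 0.3, 0.3], -1.5),
]

probs = [annotator_model(features, w, b) for w, b in annotators]
integrated = aggregate(probs, [1.0, 1.0, 1.0], -1.5)
print(f"per-annotator: {[round(p, 3) for p in probs]}, integrated: {integrated:.3f}")
```

In the paper, both parts are neural networks and the whole stack is fine-tuned jointly after pre-training; the point of the sketch is only the data flow, in which each annotator's subjective perception is modeled separately before the aggregation step produces the integrated label.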


DOI: 10.21437/Interspeech.2018-2067

Cite as: Inoue, K., Lala, D., Takanashi, K., Kawahara, T. (2018) Engagement Recognition in Spoken Dialogue via Neural Network by Aggregating Different Annotators' Models. Proc. Interspeech 2018, 616-620, DOI: 10.21437/Interspeech.2018-2067.


@inproceedings{Inoue2018,
  author={Koji Inoue and Divesh Lala and Katsuya Takanashi and Tatsuya Kawahara},
  title={Engagement Recognition in Spoken Dialogue via Neural Network by Aggregating Different Annotators' Models},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={616--620},
  doi={10.21437/Interspeech.2018-2067},
  url={http://dx.doi.org/10.21437/Interspeech.2018-2067}
}