Proposal of a Multimodal Framework for Generating Robot’s Spontaneous Attention Directions and Nods in Group Discussion

Hung-Hsuan Huang, Seiya Kimura, Kazuhiro Kuwabara, Toyoaki Nishida


Our ongoing project aims to build a robot that can participate in group discussions, so that users can repeatedly practice group discussion with it. In this paper, we propose a multimodal framework that incorporates modules for generating the robot's spontaneous head movements, shifts of attention focus, and nods. The generation models are derived from a human-human group discussion corpus using support vector classifiers. Dedicated models are trained for three conversation situations: when the robot is speaking, when the robot is listening to other participants, and when no participant is speaking. The learning process uses low-level verbal and non-verbal features (speech turn, prosody, face direction, and head activity) extracted from the participants other than the focused one (the robot).
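To make the described setup concrete, the sketch below shows one plausible way to train the per-situation classifiers: one support vector classifier per conversation situation, fit on feature vectors built from the other participants' low-level cues. This is a minimal illustration, not the authors' implementation; the situation names, the `frames` data layout, and the `train_situation_models` helper are assumptions, since the paper specifies only that SVM classifiers are learned from the corpus features.

```python
# Minimal sketch of the per-situation SVM setup described in the abstract.
# The situation labels, data layout, and helper name are hypothetical;
# the paper states only that support vector classifiers are trained on
# low-level multimodal features from the non-focused participants.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

SITUATIONS = ("robot_speaking", "robot_listening", "no_speaker")

def train_situation_models(frames):
    """Train one SVC per conversation situation.

    `frames` is assumed to be a list of (situation, feature_vector, label)
    tuples, where each feature vector concatenates cues from the other
    participants (speech turn, prosody, face direction, head activity)
    and the label is the robot's target behavior (attention target or nod).
    """
    models = {}
    for situation in SITUATIONS:
        X = np.array([f for s, f, _ in frames if s == situation])
        y = np.array([l for s, _, l in frames if s == situation])
        # RBF kernel is a common SVC default; the paper does not state
        # which kernel was used, so treat this as an assumption.
        model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        model.fit(X, y)
        models[situation] = model
    return models
```

At runtime, the framework would select the model matching the current conversation situation and feed it the latest feature vector to decide the robot's attention direction or nod, though the paper does not detail this dispatch step.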


DOI: 10.21437/AI-MHRI.2018-4

Cite as: Huang, H., Kimura, S., Kuwabara, K., Nishida, T. (2018) Proposal of a Multimodal Framework for Generating Robot’s Spontaneous Attention Directions and Nods in Group Discussion. Proc. FAIM/ISCA Workshop on Artificial Intelligence for Multimodal Human Robot Interaction, 15-18, DOI: 10.21437/AI-MHRI.2018-4.


@inproceedings{Huang2018,
  author={Hung-Hsuan Huang and Seiya Kimura and Kazuhiro Kuwabara and Toyoaki Nishida},
  title={Proposal of a Multimodal Framework for Generating Robot’s Spontaneous Attention Directions and Nods in Group Discussion},
  year=2018,
  booktitle={Proc. FAIM/ISCA Workshop on Artificial Intelligence for Multimodal Human Robot Interaction},
  pages={15--18},
  doi={10.21437/AI-MHRI.2018-4},
  url={http://dx.doi.org/10.21437/AI-MHRI.2018-4}
}