Sixth European Conference on Speech Communication and Technology
(EUROSPEECH'99)

Budapest, Hungary
September 5-9, 1999

Multi-person Conversation via Multi-modal Interface - A Robot who Communicate with Multi-user -

Yosuke Matsusaka, Tsuyoshi Tojo, Sentaro Kubota, Kenji Furukawa, Daisuke Tamiya, Keisuke Hayata, Yuichiro Nakano, Tetsunori Kobayashi

School of Science and Engineering, Waseda University, Shinjuku-ku, Tokyo, Japan

This paper describes a robot that converses with multiple people using its multi-modal interface. Multi-person conversation raises many new problems that are not addressed in conventional one-to-one conversation: information flow problems (recognizing who is speaking and to whom, and making clear to whom the system is speaking), the space-information sharing problem, and the turn-holder estimation problem (estimating who the next speaker is). We solved these problems by means of a multi-modal interface: face direction recognition, gesture recognition, sound direction recognition, speech recognition, and gestural expression. The systematic combination of these functions realized a human-friendly multi-person conversation system.
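The fusion of cues described in the abstract can be sketched as follows. This is a hypothetical illustration, not the paper's actual method: the class names, angular tolerance, and decision rules are assumptions. It combines a sound-direction estimate with face-direction recognition results to guess who is speaking and whether the robot is being addressed.

```python
# Hypothetical sketch of multi-modal speaker/addressee estimation.
# All names, thresholds, and rules are illustrative assumptions,
# not taken from the paper.

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class UserCue:
    name: str
    position_deg: float       # user's angular position relative to the robot
    face_toward_robot: bool   # output of face-direction recognition


def estimate_speaker(users: List[UserCue],
                     sound_direction_deg: float,
                     tolerance_deg: float = 15.0) -> Optional[UserCue]:
    """Pick the user whose position best matches the sound direction."""
    candidates = [u for u in users
                  if abs(u.position_deg - sound_direction_deg) <= tolerance_deg]
    if not candidates:
        return None
    return min(candidates,
               key=lambda u: abs(u.position_deg - sound_direction_deg))


def estimate_addressee(speaker: Optional[UserCue],
                       users: List[UserCue]) -> Optional[str]:
    """If the speaker faces the robot, assume the robot is addressed;
    otherwise assume the other user is."""
    if speaker is None:
        return None
    if speaker.face_toward_robot:
        return "robot"
    others = [u.name for u in users if u.name != speaker.name]
    return others[0] if others else "robot"


if __name__ == "__main__":
    users = [UserCue("A", -30.0, True), UserCue("B", 30.0, False)]
    speaker = estimate_speaker(users, sound_direction_deg=-25.0)
    addressee = estimate_addressee(speaker, users)
    print(speaker.name, addressee)  # sound near A, who faces the robot
```

A real system would of course operate on noisy, time-varying recognizer outputs rather than clean labels; the point here is only the combination of independent modalities into one conversational decision.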



Bibliographic reference.  Matsusaka, Yosuke / Tojo, Tsuyoshi / Kubota, Sentaro / Furukawa, Kenji / Tamiya, Daisuke / Hayata, Keisuke / Nakano, Yuichiro / Kobayashi, Tetsunori (1999): "Multi-person conversation via multi-modal interface - a robot who communicate with multi-user -", In EUROSPEECH'99, 1723-1726.