Sixth International Conference on Spoken Language Processing
(ICSLP 2000)

Beijing, China
October 16-20, 2000

Multimodal Corpora for Human-Machine Interaction Research

Satoshi Nakamura (1), Keiko Watanuki (2), Toshiyuki Takezawa (1), Satoru Hayamizu (3)

(1) ATR Spoken Language Translation Research Laboratories, Seika-cho, Soraku-gun, Kyoto, Japan
(2) Real World Computing Partnership Multimodal Functions Sharp Laboratories in System Technology Development Center, Sharp Corporation, Mihama-ku, Chiba, Japan
(3) Electrotechnical Laboratories, Tsukuba, Ibaraki, Japan

In recent years, human-machine interaction has grown in importance. One approach toward ideal human-machine interaction is to develop a multimodal system that behaves like a human being. This paper gives an overview of multimodal corpora currently being developed in Japan for this purpose. The paper describes databases of 1) multimodal interaction, 2) audio-visual speech, 3) spoken dialogue with multiple speakers, 4) sign-language gestures, and 5) sound scene data in real acoustic environments.


Bibliographic reference. Nakamura, Satoshi / Watanuki, Keiko / Takezawa, Toshiyuki / Hayamizu, Satoru (2000): "Multimodal corpora for human-machine interaction research", in ICSLP-2000, vol. 4, 25-28.