Third International Conference on Spoken Language Processing (ICSLP 94)

Yokohama, Japan
September 18-22, 1994

Evaluation of Multimodal Interface Using Spoken Language and Pointing Gesture on Interior Design System

Haru Ando, Yoshinori Kitahara, Nobuo Hataoka

Central Research Laboratory, Hitachi, Ltd., Tokyo, Japan

This paper describes the evaluation results of a multimodal interface using speech and pointing gestures. We have developed an "Interior Design System" as a prototype for evaluating multimodal interfaces. The experiments assess the effectiveness of the proposed multimodal interface and investigate desirable specifications for such an interface. They were performed under two different conditions: first on the prototype, which has a speech input processing unit, and second by a "Wizard of Oz" method. Through these experiments, we compared multimodal interfaces with unimodal interfaces, and compared command utterances with sentence utterances to determine the most suitable form of speech input for users. From the results, we have confirmed that the proposed multimodal interface is effective, and that command utterances are better than sentence utterances given the capabilities of current speech recognition technology.


Bibliographic reference. Ando, Haru / Kitahara, Yoshinori / Hataoka, Nobuo (1994): "Evaluation of multimodal interface using spoken language and pointing gesture on interior design system", in ICSLP-1994, 567-570.