September 22-25, 1997
Within the framework of a prospective ergonomic approach, we simulated two multimodal user interfaces in order to study the usability of constrained vs. spontaneous speech in a multimodal environment. The first experiment, which served as a reference, gave subjects the opportunity to use speech and gestures freely, while subjects in the second experiment had to comply with multimodal constraints. We first describe the experimental setup and the approach we adopted for designing the artificial command language used in the second experiment. We then present the results of our analysis of the subjects' utterances and gestures, laying emphasis on how they implemented the linguistic constraints. The conclusions of the empirical assessment of the usability of this multimodal command language, built from a restricted subset of natural language and simple designation gestures, are accompanied by recommendations which may prove useful for improving the usability of oral human-computer interaction in a multimodal environment.
Bibliographic reference: Robbe, Sandrine / Carbonell, Noëlle / Valot, Claude (1997): "Towards usable multimodal command languages: definition and ergonomic assessment of constraints on users' spontaneous speech and gestures". In: EUROSPEECH-1997, pp. 1655-1658.