7th International Conference on Spoken Language Processing
September 16-20, 2002
Multimodal interfaces are designed with flexibility in mind, yet very few current multimodal systems can adapt to major sources of user or environmental variation. Developing adaptive multimodal processing techniques will require empirical guidance on modeling key aspects of individual differences. In the present study, we collected data from 24 children aged 7 to 10 as they interacted with an educational software prototype using speech and pen input. A comprehensive analysis of the children’s multimodal integration patterns revealed that each child was classifiable as either a simultaneous or a sequential integrator, although children integrated signals simultaneously more often than adults did. During their sequential constructions, children’s intermodal lags were also briefer than those of adult users. The high degree of consistency and early predictability of children’s integration patterns matched previously reported adult data. These results have implications for establishing temporal thresholds and adaptive multimodal processing strategies for children’s applications. The long-term goal of this research is life-span modeling of users’ integration and synchronization patterns, which will be needed to design future high-performance adaptive multimodal systems.
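The simultaneous-versus-sequential distinction described above can be sketched in code: a construction is simultaneous when the speech and pen signals overlap in time, and sequential when one follows the other with an intermodal lag. The sketch below is purely illustrative; the function names, timestamp format, and majority-vote rule are assumptions, not the authors’ actual classification procedure.

```python
# Hypothetical sketch of classifying multimodal integration patterns.
# A construction is "simultaneous" if the speech and pen signals overlap
# in time, "sequential" otherwise (with the gap as the intermodal lag).
# All names and the majority-vote rule are illustrative assumptions.

def classify_construction(speech, pen):
    """Label one multimodal construction.

    speech, pen: (start, end) times in seconds for each input signal.
    Returns ('simultaneous', 0.0) if the signals overlap,
    else ('sequential', lag) where lag is the gap between them.
    """
    (s0, s1), (p0, p1) = speech, pen
    overlap = min(s1, p1) - max(s0, p0)
    if overlap >= 0:
        return "simultaneous", 0.0
    return "sequential", -overlap  # negative overlap is the intermodal lag

def dominant_pattern(constructions):
    """Classify a user by the majority pattern across their constructions."""
    labels = [classify_construction(s, p)[0] for s, p in constructions]
    sim = labels.count("simultaneous")
    if sim >= len(labels) - sim:
        return "simultaneous integrator"
    return "sequential integrator"

# Example: two overlapping constructions and one sequential one.
user = [((0.0, 1.2), (0.5, 1.0)),
        ((2.0, 3.0), (2.1, 2.8)),
        ((5.0, 5.5), (6.0, 6.4))]
print(dominant_pattern(user))  # -> simultaneous integrator
```

In a real adaptive system, the per-user pattern identified this way could drive a temporal threshold for fusing speech and pen input, in the spirit of the thresholds the abstract proposes.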
Bibliographic reference. Xiao, Benfang / Girand, Cynthia / Oviatt, Sharon (2002): "Multimodal integration patterns in children", In ICSLP-2002, 629-632.