INTERSPEECH 2011
Multimodal interaction allows users to specify commands using combinations of inputs from multiple modalities. For example, in a local search application, a user might say "gas stations" while simultaneously tracing a route on a touchscreen display. In this demonstration, we describe the extension of our cloud-based speech recognition architecture to a Multimodal Semantic Interpretation System (MSIS) that supports processing of multimodal inputs streamed over HTTP. We illustrate the capabilities of the framework using Speak4it, a deployed mobile local search application supporting combined speech and gesture input. We provide interactive demonstrations of Speak4it on the iPhone and iPad and explain the challenges of supporting true multimodal interaction in a deployed mobile service.
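The abstract does not specify the wire format used to stream speech and gesture to the MSIS, so the following is only a minimal sketch of what a client-side request might look like, assuming a hypothetical endpoint (`MSIS_URL`) that accepts a single multipart HTTP POST carrying recorded audio plus a JSON-encoded gesture trace. The endpoint, field names, and response shape are illustrative assumptions, not the deployed Speak4it protocol.

```python
import json
import requests  # generic HTTP client; any HTTP library would serve

# Hypothetical endpoint -- the paper does not disclose the actual service URL or schema.
MSIS_URL = "https://example.com/msis/interpret"


def send_multimodal_query(audio_path: str, gesture_trace: list[tuple[float, float]]) -> dict:
    """Post one spoken query and one touchscreen gesture trace in a single HTTP request."""
    gesture_payload = json.dumps({
        "type": "route",          # e.g. a route traced on the map while speaking
        "points": gesture_trace,  # (x, y) screen or map coordinates
    })
    with open(audio_path, "rb") as audio:
        response = requests.post(
            MSIS_URL,
            files={"audio": ("query.wav", audio, "audio/wav")},
            data={"gesture": gesture_payload},
            timeout=10,
        )
    response.raise_for_status()
    # Assumed response: a combined semantic interpretation, e.g. search intent plus location.
    return response.json()


if __name__ == "__main__":
    result = send_multimodal_query("gas_stations.wav", [(10.0, 20.0), (12.5, 24.0)])
    print(result)
```

Bundling both modalities in one request mirrors the kind of combined speech-and-gesture input the demonstration describes, though a streaming deployment could equally send audio in chunks and attach the gesture trace as metadata.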
Bibliographic reference. Johnston, Michael / Ehlen, Patrick (2011): "Speak4it and the Multimodal Semantic Interpretation System", in Proceedings of INTERSPEECH 2011, pp. 3333-3334.