INTERSPEECH 2011
12th Annual Conference of the International Speech Communication Association

Florence, Italy
August 27-31, 2011

Speak4it and the Multimodal Semantic Interpretation System

Michael Johnston (1), Patrick Ehlen (2)

(1) AT&T Labs Research, USA
(2) AT&T Labs, USA

Multimodal interaction allows users to specify commands by combining inputs from multiple modalities. For example, in a local search application, a user might say "gas stations" while simultaneously tracing a route on a touchscreen display. In this demonstration, we describe the extension of our cloud-based speech recognition architecture to a Multimodal Semantic Interpretation System (MSIS) that supports processing of multimodal inputs streamed over HTTP. We illustrate the capabilities of the framework using Speak4it, a deployed mobile local search application supporting combined speech and gesture input. We provide interactive demonstrations of Speak4it on the iPhone and iPad and explain the challenges of supporting true multimodal interaction in a deployed mobile service.
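To make the client-server interaction described above more concrete, the following Python sketch (using the requests library) sends a recorded speech query together with a JSON-encoded gesture trace in a single HTTP POST to a hypothetical interpretation endpoint. The URL, field names, gesture encoding, and response format are illustrative assumptions rather than the deployed Speak4it/MSIS API, and a single request is a simplification of the streamed input the paper describes.

    # Hypothetical sketch: one HTTP request carrying speech audio plus a
    # touchscreen gesture trace for combined semantic interpretation.
    # Endpoint URL, field names, and formats are assumptions, not the
    # actual AT&T MSIS interface.
    import json
    import requests

    def send_multimodal_query(audio_path, gesture_points,
                              url="https://example.com/msis/interpret"):
        """POST recorded speech plus a gesture trace; return the parsed reply.

        gesture_points: list of (x, y) screen coordinates tracing a route
        or region on the display.
        """
        gesture = {"type": "trace", "points": gesture_points}
        with open(audio_path, "rb") as audio:
            response = requests.post(
                url,
                files={"audio": ("query.wav", audio, "audio/wav")},
                data={"gesture": json.dumps(gesture)},
            )
        response.raise_for_status()
        # In a real system the server would return a combined interpretation,
        # e.g. the recognized query plus the region implied by the trace.
        return response.json()

    if __name__ == "__main__":
        result = send_multimodal_query("query.wav",
                                       [(10, 20), (40, 60), (90, 120)])
        print(result)

A client along these lines would capture the audio and the trace simultaneously, so that the server can fuse the spoken query ("gas stations") with the drawn route when producing the search interpretation.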


Bibliographic reference. Johnston, Michael / Ehlen, Patrick (2011): "Speak4it and the multimodal semantic interpretation system", in INTERSPEECH-2011, 3333-3334.