Interspeech'2005 - Eurospeech

Lisbon, Portugal
September 4-8, 2005

An Architecture for Seamless Access to Distributed Multimodal Services

David Pearce (1), Jonathan Engelsma (2), James Ferrans (2), John Johnson (2)

(1) Motorola Labs, UK; (2) Motorola Labs, USA

In this paper we present a standards-based architecture that enables ubiquitous access to distributed speech and multimodal services. Motorola's "Seamless Mobility" initiatives focus on giving people continuity of services regardless of context, location, device, or type of network connectivity. Speech technology is a key aspect of seamless mobility: not only does it make it easier to enter information on small devices, it also allows devices to be used in contexts where eyes and hands are unavailable. But the visual modality, comprising keypad input and display output, is also vital: speech input cannot be used in very noisy environments or where privacy is a concern, and speech output is difficult to remember. Multimodal systems combining visual, speech, and other modalities are therefore crucial for seamless mobility. The architecture depends on standard VoIP protocols such as SIP and RTP. It also uses standard web languages for voice dialogs (VoiceXML) and visual dialogs (XHTML, J2ME). To provide high-performance speech recognition, it uses the Distributed Speech Recognition (DSR) standards. The result is a responsive system that places minimal demands on the device and maximally leverages existing mobile content ecosystems.
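For concreteness, the fragment below is a minimal sketch of the kind of VoiceXML 2.0 voice dialog such an architecture would serve from its web tier; it is not taken from the paper, and the field name, grammar file, and submit URL are illustrative assumptions.

    <?xml version="1.0" encoding="UTF-8"?>
    <vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
      <!-- A single form that asks the caller for a city name. -->
      <form id="city_form">
        <field name="city">
          <prompt>Which city would you like the weather for?</prompt>
          <!-- The grammar reference is a placeholder; a real service would
               point at its own SRGS grammar. -->
          <grammar src="cities.grxml" type="application/srgs+xml"/>
          <filled>
            <!-- Hand the recognized value back to a server-side script
                 (hypothetical URL). -->
            <submit next="http://example.com/weather" namelist="city"/>
          </filled>
        </field>
      </form>
    </vxml>

In a distributed deployment of this sort, the device would capture speech, send DSR-encoded features over RTP to a network recognizer, and the recognized value would drive both the voice dialog above and a synchronized visual page.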


Bibliographic reference. Pearce, David / Engelsma, Jonathan / Ferrans, James / Johnson, John (2005): "An architecture for seamless access to distributed multimodal services", in INTERSPEECH-2005, 2845-2848.