Sixth International Conference on Spoken Language Processing
(ICSLP 2000)

Beijing, China
October 16-20, 2000

Using HPSG to Represent Multi-Modal Grammar in Multi-Modal Dialogue

Crusoe Mao, Tony Tuo, Danjun Liu

Intel China Research Centre, Intel China Ltd., Beijing, China

To realize their full potential, multi-modal systems need to support not only synchronized integration of multiple input modalities, but also a consistent, easy-to-use interface that isolates integration strategies from ad hoc, application-specific handling. As the range of supported multi-modal utterances extends, as the number of input modality types grows, and as the utterances from individual modalities become more complex, it becomes essential to provide a well-understood and generally applicable common meaning representation for multi-modal utterances. This paper presents a fully formalized declarative statement of a multi-modal grammar; the representation we use draws on unification-based approaches to syntax and semantics, such as head-driven phrase structure grammar (HPSG). The work presented here shows that our approach supports parsing and interpretation of natural human input distributed across the spatial, temporal, and acoustic dimensions. Integration strategies are stated in a high-level, HPSG-based representation that supports rapid prototyping and iterative development of multi-modal systems.
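To make the unification-based integration concrete, here is a minimal sketch (a hypothetical illustration, not the authors' implementation) of how feature structures from two modalities, a spoken command and a pointing gesture, can be unified into a single meaning representation. The feature names (`cat`, `action`, `object`, `ref`, `location`) are assumptions chosen for the example:

```python
# Sentinel value marking a failed unification.
FAIL = object()

def unify(a, b):
    """Unify two feature structures represented as nested dicts.

    Atomic values unify only if equal; dicts unify by recursively
    merging their features. Returns FAIL on a feature clash.
    """
    if a == b:
        return a
    if isinstance(a, dict) and isinstance(b, dict):
        result = dict(a)
        for key, value in b.items():
            if key in result:
                sub = unify(result[key], value)
                if sub is FAIL:
                    return FAIL
                result[key] = sub
            else:
                result[key] = value
        return result
    return FAIL  # differing atomic values cannot unify

# Spoken utterance "move that": underspecified deictic object.
speech = {"cat": "command", "action": "move",
          "object": {"ref": "deictic"}}

# Pointing gesture: supplies the location of the deictic referent.
gesture = {"object": {"ref": "deictic",
                      "location": {"x": 120, "y": 45}}}

combined = unify(speech, gesture)
# combined now carries the action from speech and the location
# from the gesture in one feature structure.
```

A clash between modalities (e.g. speech saying `"move"` where another source demands `"delete"`) makes `unify` return `FAIL`, which is how a unification-based grammar rejects inconsistent multi-modal readings.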



Bibliographic reference.  Mao, Crusoe / Tuo, Tony / Liu, Danjun (2000): "Using HPSG to represent multi-modal grammar in multi-modal dialogue", in ICSLP-2000, vol. 2, 735-738.