Interspeech'2005 - Eurospeech

Lisbon, Portugal
September 4-8, 2005

Training a Maximum Entropy Model for Surface Realization

Hua Cheng (1), Fuliang Weng (2), Niti Hantaweepant (1), Lawrence Cavedon (1), Stanley Peters (1)

(1) Stanford University, USA; (2) Robert Bosch Corp., USA

Most existing statistical surface realizers either rely on handcrafted grammars to provide coverage or are tuned to specific applications. This paper describes an initial effort toward building a statistical surface realization model that provides both precision and coverage. On the Penn TreeBank and Proposition Bank corpora, we trained a Maximum Entropy model that, given a predicate-argument semantic representation, predicts the surface form for realizing a semantic concept and the ordering of sibling semantic concepts and their parent. Initial results show that precision for predicting surface forms and orderings reached 80% and 90% respectively on a held-out portion of the Penn TreeBank. We use the model to generate sentences from our domain representations, and we are currently evaluating the model on a corpus collected for our in-car applications.
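The core technique named in the abstract, a Maximum Entropy (multinomial logistic regression) classifier over features of a semantic node, can be sketched as follows. This is a minimal, self-contained illustration; the feature names (e.g. `role=ARGM-LOC`) and the NP/PP surface-form labels are hypothetical toy examples, not the paper's actual feature set or label inventory.

```python
# Minimal maximum-entropy classifier trained by gradient ascent on the
# conditional log-likelihood, sketching how a surface form might be
# predicted from predicate-argument features. Toy data, not the paper's.
import math
from collections import defaultdict

def train_maxent(data, labels, epochs=200, lr=0.5):
    """data: list of feature dicts; labels: parallel list of class names."""
    classes = sorted(set(labels))
    w = defaultdict(float)  # weights keyed by (class, feature)

    for _ in range(epochs):
        for feats, gold in zip(data, labels):
            # Softmax over per-class scores (shifted by the max for stability).
            s = {c: sum(w[(c, f)] * v for f, v in feats.items()) for c in classes}
            m = max(s.values())
            exps = {c: math.exp(s[c] - m) for c in classes}
            z = sum(exps.values())
            # Gradient step: (empirical count - expected count) per feature.
            for c in classes:
                err = (1.0 if c == gold else 0.0) - exps[c] / z
                for f, v in feats.items():
                    w[(c, f)] += lr * err * v
    return w, classes

def predict(w, classes, feats):
    s = {c: sum(w[(c, f)] * v for f, v in feats.items()) for c in classes}
    return max(s, key=s.get)

# Hypothetical training set: predict whether a semantic node is realized
# as a noun phrase ("NP") or a prepositional phrase ("PP").
train_x = [
    {"role=ARG0": 1, "pos=NN": 1},
    {"role=ARG0": 1, "pos=NNS": 1},
    {"role=ARGM-LOC": 1, "pos=NN": 1},
    {"role=ARGM-LOC": 1, "pos=NNS": 1},
]
train_y = ["NP", "NP", "PP", "PP"]

weights, classes = train_maxent(train_x, train_y)
print(predict(weights, classes, {"role=ARGM-LOC": 1, "pos=NN": 1}))  # → PP
```

In the paper's setting the same machinery would be run twice: once to choose a surface form for each concept, and once to choose an ordering of sibling concepts relative to their parent.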


Bibliographic reference.  Cheng, Hua / Weng, Fuliang / Hantaweepant, Niti / Cavedon, Lawrence / Peters, Stanley (2005): "Training a maximum entropy model for surface realization", In INTERSPEECH-2005, 1953-1956.