EUROSPEECH '97
5th European Conference on Speech Communication and Technology

Rhodes, Greece
September 22-25, 1997


Structure and Performance of a Dependency Language Model

Ciprian Chelba (1), David Engle (2), Frederick Jelinek (1), Victor Jimenez (3), Sanjeev Khudanpur (1), Lidia Mangu (1), Harry Printz (4), Eric Ristad (5), Ronald Rosenfeld (6), Andreas Stolcke (7), Dekai Wu (8)

(1) Johns Hopkins University, Baltimore, MD, USA
(2) Department of Defense, Fort Meade, MD, USA
(3) Universidad Politécnica de Valencia, Spain
(4) IBM T. J. Watson Research Center, Yorktown Heights, NY, USA
(5) Princeton University, Princeton, NJ, USA
(6) Carnegie Mellon University, Pittsburgh, PA, USA
(7) SRI International, Menlo Park, CA, USA
(8) Hong Kong University of Science and Technology, Hong Kong, China

We present a maximum entropy language model that incorporates both syntax and semantics via a dependency grammar. Such a grammar expresses the relations between words as a directed graph. Because the edges of this graph may connect words that are arbitrarily far apart in a sentence, this technique can incorporate the predictive power of words that lie outside of bigram or trigram range. We have built several simple dependency models, as we call them, and tested them in a speech recognition experiment. We report experimental results for these models here, including one that has a small but statistically significant advantage (p < 0.02) over a bigram language model.
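To make the idea concrete, the sketch below shows a toy conditional maximum entropy model in which the probability of the next word depends on two kinds of features: a bigram feature over the immediately preceding word, and a dependency feature over the head word linked to the prediction by a dependency edge, which may lie far back in the sentence. The vocabulary, feature weights, and example sentence are invented for illustration and are not taken from the paper; the paper's actual features and trained weights are not reproduced here.

    import math

    # Toy conditional maximum-entropy language model:
    #     p(w | h) = exp( sum_i lambda_i * f_i(h, w) ) / Z(h)
    # where the history h supplies both the previous word (bigram-style)
    # and the dependency head of w, which can be arbitrarily far back.

    VOCAB = ["the", "contract", "that", "we", "signed", "expired", "ran"]

    # Hypothetical feature weights (lambda_i); in practice these are trained.
    WEIGHTS = {
        ("bigram",   "signed",   "expired"): 0.2,
        ("bigram",   "signed",   "ran"):     0.3,
        ("dep_head", "contract", "expired"): 1.5,  # head "contract" favours "expired"
        ("dep_head", "contract", "ran"):     0.1,
    }

    def features(prev_word, head_word, w):
        """Active features for predicting w from a bigram context and a dependency head."""
        return [("bigram", prev_word, w), ("dep_head", head_word, w)]

    def prob(prev_word, head_word, w):
        """p(w | prev_word, head_word) under the toy maximum-entropy model."""
        def score(v):
            return sum(WEIGHTS.get(f, 0.0) for f in features(prev_word, head_word, v))
        z = sum(math.exp(score(v)) for v in VOCAB)  # partition function Z(h)
        return math.exp(score(w)) / z

    if __name__ == "__main__":
        # In "the contract that we signed ...", the dependency head of the next
        # verb is "contract", four words back -- outside trigram range.
        for w in ("expired", "ran"):
            print(w, round(prob("signed", "contract", w), 3))

Running the example, the dependency feature on the distant head "contract" shifts probability toward "expired" even though the bigram context "signed" alone slightly prefers "ran", which is the kind of long-range predictive power the abstract describes.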


Bibliographic reference: Chelba, Ciprian / Engle, David / Jelinek, Frederick / Jimenez, Victor / Khudanpur, Sanjeev / Mangu, Lidia / Printz, Harry / Ristad, Eric / Rosenfeld, Ronald / Stolcke, Andreas / Wu, Dekai (1997): "Structure and performance of a dependency language model", in EUROSPEECH-1997, 2775-2778.