International Workshop on Spoken Language Translation (IWSLT) 2011

San Francisco, CA, USA
December 8-9, 2011

Lexicon Models for Hierarchical Phrase-Based Machine Translation

Matthias Huck, Saab Mansour, Simon Wiesler, Hermann Ney

Human Language Technology and Pattern Recognition Group, RWTH Aachen University, Aachen, Germany

In this paper, we investigate lexicon models for hierarchical phrase-based statistical machine translation. We study five types of lexicon models: a model which is extracted from word-aligned training data and, given the word alignment matrix, relies on pure relative frequencies [1]; the IBM model 1 lexicon [2]; a regularized version of IBM model 1; a triplet lexicon model variant [3]; and a discriminatively trained word lexicon model [4]. We explore source-to-target models with phrase-level as well as sentence-level scoring and target-to-source models with scoring on the phrase level only. For the first two types of lexicon models, we compare several scoring variants. All models are used during search, i.e. they are incorporated directly into the log-linear model combination of the decoder.
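To illustrate how a lexicon model can score a phrase pair during decoding, the sketch below implements phrase-level source-to-target scoring in the style of IBM model 1, where each target word may be explained by any source word or by an empty (NULL) word. The function name, the NULL token, and the toy probability table are assumptions for this example, not taken from the paper.

```python
def ibm1_phrase_score(src, tgt, t, null_token="<null>"):
    """IBM-model-1-style score p(tgt | src) for a phrase pair.

    src, tgt: lists of source/target words.
    t: dict mapping (source_word, target_word) -> translation probability.
    Each target word is scored by averaging its translation probability
    over all source words plus the NULL word.
    """
    src_ext = [null_token] + list(src)
    score = 1.0
    for e in tgt:
        # uniform alignment assumption: average over all source positions
        score *= sum(t.get((f, e), 0.0) for f in src_ext) / len(src_ext)
    return score


# Toy lexicon table (illustrative values only):
toy_t = {
    ("la", "the"): 0.8,
    ("maison", "house"): 0.7,
    ("<null>", "the"): 0.1,
}
```

Sentence-level scoring follows the same formula, applied to the whole sentence pair rather than to an extracted phrase pair.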
  Phrase table smoothing with triplet lexicon models and with discriminative word lexicons is a novel contribution. We also propose a new regularization technique for IBM model 1 by means of the Kullback-Leibler divergence with the empirical unigram distribution as the regularization term.
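The abstract only names the regularizer; one plausible way to write such a regularized IBM model 1 training criterion (the direction of the KL divergence, the sign convention, and the weight λ are assumptions made here for illustration) is:

```latex
% Sketch: maximize the training-data log-likelihood minus a KL penalty
% that pulls each lexical distribution t(. | f) toward the empirical
% target unigram distribution \hat{p}(e). The weight \lambda is a
% hypothetical regularization constant.
\hat{t} = \operatorname*{argmax}_{t}
  \sum_{s} \log p_{\mathrm{IBM1}}\bigl(e^{(s)} \mid f^{(s)}; t\bigr)
  \; - \; \lambda \sum_{f} D_{\mathrm{KL}}\!\bigl(t(\cdot \mid f) \,\big\|\, \hat{p}(\cdot)\bigr)
```

The penalty discourages lexical distributions that drift far from the unigram statistics, which can smooth rare-word probability estimates.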
  Experiments are carried out on the large-scale NIST Chinese-to-English translation task and on the English-to-French and Arabic-to-English IWSLT TED tasks. For Chinese-to-English and English-to-French, we obtain the best results by using the discriminative word lexicon to smooth our phrase tables.


Bibliographic reference. Huck, Matthias / Mansour, Saab / Wiesler, Simon / Ney, Hermann (2011): "Lexicon models for hierarchical phrase-based machine translation", In IWSLT-2011, 191-198.