9th Annual Conference of the International Speech Communication Association

Brisbane, Australia
September 22-26, 2008

Context Dependent Language Model Adaptation

X. Liu, M. J. F. Gales, P. C. Woodland

University of Cambridge, UK

Language models (LMs) are often constructed by building multiple component LMs that are combined using interpolation weights. By tuning these interpolation weights, using either perplexity-based or discriminative approaches, it is possible to adapt LMs to a particular task. In this work, improved LM adaptation is achieved by introducing context dependent interpolation weights. An important part of this new approach is robust weight estimation, and two schemes for this are described. The first is based on MAP estimation, where either global interpolation weights, or context dependent interpolation weights obtained from the training data, are used as priors. The second scheme uses class based contexts to determine the interpolation weights. Both schemes are evaluated using unsupervised LM adaptation on a Mandarin broadcast transcription task. Context dependent weights yield consistent gains in perplexity over global weights, as well as reductions in character error rate.
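The core idea of the abstract can be illustrated with a minimal sketch of linear LM interpolation, contrasting a single global weight vector with weights that depend on a context class. All names, class labels, and probability values below are hypothetical illustrations, not the paper's actual estimation procedure; the fallback from an unseen context class to the global weights only loosely mirrors the MAP-prior idea described above.

```python
from collections import defaultdict

def interpolate(component_probs, weights):
    """Linearly interpolate component LM probabilities:
    P(w | h) = sum_m lambda_m * P_m(w | h)."""
    return sum(lam * p for lam, p in zip(weights, component_probs))

# Global interpolation: one weight vector shared by all contexts
# (weights sum to 1; values are made up for illustration).
global_weights = [0.6, 0.4]

# Context dependent interpolation: a separate weight vector per
# context class; unseen classes fall back to the global weights.
context_weights = defaultdict(
    lambda: global_weights,
    {"class_A": [0.8, 0.2],
     "class_B": [0.3, 0.7]},
)

# Two component LMs assign these probabilities to a word w given
# history h (made-up numbers).
p_components = [0.02, 0.05]

p_global = interpolate(p_components, global_weights)            # 0.032
p_context = interpolate(p_components, context_weights["class_A"])  # 0.026
```

Tuning the per-class weight vectors on adaptation data, rather than a single global vector, is what allows the interpolated model to favour different components in different contexts.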


Bibliographic reference.  Liu, X. / Gales, M. J. F. / Woodland, P. C. (2008): "Context dependent language model adaptation", In INTERSPEECH-2008, 837-840.