Language models (LMs) are often constructed by building component models on multiple text sources, which are then combined using global, context-free interpolation weights. By re-adjusting these weights, LMs may be adapted to a target domain representing a particular genre, epoch or other higher-level attributes. A major limitation of this approach is that other factors determining the "usefulness" of sources on a context-dependent basis, such as modeling resolution, generalization, topics and styles, are poorly modeled. To overcome this problem, this paper investigates a context-dependent form of LM interpolation and test-time adaptation. Depending on the context, a discrete history weighting function is used to dynamically adjust the contribution from component models. In previous research, such a weighting function was used primarily for LM adaptation. In this paper, a range of schemes for combining context-dependent weights obtained from training and test data is proposed to improve LM adaptation. Consistent perplexity and error rate gains of 6% relative were obtained on a state-of-the-art broadcast recognition task.
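The contrast between the two interpolation schemes can be written out as follows; this is a standard formulation sketched here for illustration, with notation (component index m, history h, weights \lambda_m) chosen for this summary rather than taken verbatim from the paper:

  % Global, context-free interpolation: each component LM m
  % contributes with a single scalar weight \lambda_m.
  P(w \mid h) = \sum_{m=1}^{M} \lambda_m \, P_m(w \mid h),
  \qquad \lambda_m \ge 0, \quad \sum_{m=1}^{M} \lambda_m = 1

  % Context-dependent interpolation: the weights become a
  % function \lambda_m(h) of the discrete history h, so the
  % contribution of each component varies with the local context.
  P(w \mid h) = \sum_{m=1}^{M} \lambda_m(h) \, P_m(w \mid h),
  \qquad \lambda_m(h) \ge 0, \quad \sum_{m=1}^{M} \lambda_m(h) = 1

Adapting to a target domain then amounts to re-estimating the weights: a single weight vector per component in the global case, versus one weight vector per history class in the context-dependent case.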
Cite as: Liu, X., Gales, M.J.F., Woodland, P.C. (2009) Use of contexts in language model interpolation and adaptation. Proc. Interspeech 2009, 360-363, doi: 10.21437/Interspeech.2009-115
@inproceedings{liu09_interspeech,
  author    = {X. Liu and M. J. F. Gales and P. C. Woodland},
  title     = {{Use of contexts in language model interpolation and adaptation}},
  year      = {2009},
  booktitle = {Proc. Interspeech 2009},
  pages     = {360--363},
  doi       = {10.21437/Interspeech.2009-115}
}