This paper describes the use of a weighted mixture of class-based n-gram language models to perform topic adaptation. By using a fixed class n-gram history and variable word-given-class probabilities, we obtain large improvements in the performance of the class-based language model, giving it accuracy similar to that of a word n-gram model, together with a small but statistically significant improvement when it is interpolated with a word-based n-gram language model.
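As a sketch of the model this describes (the notation below, including the topic mixture weights \(\lambda_t\) and the interpolation weight \(\alpha\), is ours and not taken from the paper), the class-based probability of a word \(w_i\) is built from a fixed class history, e.g. a class trigram,

\[ P_{\mathrm{class}}(w_i \mid w_{i-2}, w_{i-1}) = P(w_i \mid c_i)\, P(c_i \mid c_{i-2}, c_{i-1}), \]

topic adaptation varies only the word-given-class term, as a weighted mixture over topic components \(t\),

\[ P(w_i \mid c_i) = \sum_t \lambda_t\, P_t(w_i \mid c_i), \qquad \sum_t \lambda_t = 1, \]

and the adapted class model is then linearly interpolated with a word n-gram over the same history \(h\),

\[ P(w_i \mid h) = \alpha\, P_{\mathrm{word}}(w_i \mid h) + (1 - \alpha)\, P_{\mathrm{class}}(w_i \mid h). \]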