Entropy Based Pruning for Non-Negative Matrix Based Language Models with Contextual Features

Barlas Oğuz, Issac Alphonso, Shuangyu Chang


Non-negative matrix-based language models have recently been introduced [1] as a computationally efficient alternative to other feature-based models such as maximum-entropy models. We present a new entropy-based pruning algorithm for this class of language models, which is fast and scalable. We report perplexity and word error rate results and compare them against regular n-gram pruning. We also train models with location and personalization features and report results at various pruning thresholds. We demonstrate that contextual features remain helpful over the vanilla model even after pruning to a similar size.
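The abstract does not spell out the pruning criterion, but entropy-based pruning for language models classically follows Stolcke's relative-entropy idea: remove a parameter if doing so increases the model's entropy by less than a threshold. The sketch below is an illustrative, simplified version of that general criterion, not the paper's actual algorithm for non-negative matrix models; the function names, the per-n-gram scoring, and the flat dictionary representation are all assumptions made for the example.

```python
import math

def kl_contribution(p_context, p_full, p_backoff):
    """Approximate relative-entropy increase from replacing the full-model
    probability `p_full` of one n-gram with its backoff estimate
    `p_backoff`, weighted by the probability `p_context` of its context.
    (Illustrative Stolcke-style score, not the paper's exact formula.)"""
    return p_context * p_full * math.log(p_full / p_backoff)

def prune(ngrams, threshold):
    """Keep only n-grams whose removal would raise model entropy by more
    than `threshold`; everything else falls back to the backoff estimate.
    `ngrams` maps an n-gram tuple to (p_context, p_full, p_backoff)."""
    kept = {}
    for ngram, (p_ctx, p_full, p_back) in ngrams.items():
        if kl_contribution(p_ctx, p_full, p_back) > threshold:
            kept[ngram] = (p_ctx, p_full, p_back)
    return kept
```

An n-gram whose full-model probability already matches its backoff estimate contributes zero relative entropy and is pruned at any positive threshold, while n-grams that deviate strongly from the backoff survive; raising the threshold trades model size against perplexity, which is the trade-off the paper evaluates.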


DOI: 10.21437/Interspeech.2016-130

Cite as

Oğuz, B., Alphonso, I., Chang, S. (2016) Entropy Based Pruning for Non-Negative Matrix Based Language Models with Contextual Features. Proc. Interspeech 2016, 2328-2332.

Bibtex
@inproceedings{Oguz2016,
  author    = {Barlas O{\u{g}}uz and Issac Alphonso and Shuangyu Chang},
  title     = {Entropy Based Pruning for Non-Negative Matrix Based Language Models with Contextual Features},
  booktitle = {Interspeech 2016},
  year      = {2016},
  pages     = {2328--2332},
  doi       = {10.21437/Interspeech.2016-130},
  url       = {http://dx.doi.org/10.21437/Interspeech.2016-130}
}