INTERSPEECH 2015
16th Annual Conference of the International Speech Communication Association

Dresden, Germany
September 6-10, 2015

Pruning Sparse Non-Negative Matrix N-Gram Language Models

Joris Pelemans, Noam Shazeer, Ciprian Chelba

Google, USA

In this paper we present a pruning algorithm and experimental results for our recently proposed Sparse Non-negative Matrix (SNM) family of language models (LMs). We show that when trained with only n-gram features, SNMLM pruning based on a mutual information criterion yields the best known pruned model on the One Billion Word Language Model Benchmark, reducing perplexity by 18% and 57% over Katz and Kneser-Ney LMs, respectively. We also present a method for converting an SNMLM to ARPA back-off format, which can be readily used in a single-pass decoder for Automatic Speech Recognition.
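The abstract does not spell out the mutual information pruning criterion itself, so the sketch below is only a rough illustration of the general idea: score each (history, word) n-gram feature with a count-weighted pointwise mutual information and drop features whose score falls below a threshold. The function names, the exact score definition, and the threshold are assumptions made for illustration, not the authors' actual criterion.

import math
from collections import Counter

def pmi_scores(ngram_counts, context_counts, word_counts, total):
    # ngram_counts: Counter mapping (history, word) -> count
    # context_counts: Counter mapping history -> count
    # word_counts: Counter mapping word -> count
    # total: total number of observed n-gram events
    scores = {}
    for (history, word), c in ngram_counts.items():
        p_joint = c / total
        p_history = context_counts[history] / total
        p_word = word_counts[word] / total
        # Weight the PMI by the joint count so that frequent,
        # informative features receive the highest scores.
        scores[(history, word)] = c * math.log(p_joint / (p_history * p_word))
    return scores

def prune(ngram_counts, scores, threshold):
    # Keep only the features whose score meets the threshold.
    return {f: c for f, c in ngram_counts.items() if scores[f] >= threshold}

# Toy usage (hypothetical counts):
# ngram_counts = Counter({("the quick", "fox"): 3, ("the quick", "brown"): 7})
# context_counts = Counter({"the quick": 10})
# word_counts = Counter({"fox": 5, "brown": 20})
# kept = prune(ngram_counts, pmi_scores(ngram_counts, context_counts, word_counts, 100), threshold=1.0)

The ARPA back-off format mentioned in the abstract is the standard plain-text LM format that lists log10 probabilities and back-off weights grouped by n-gram order, which is why a converted SNMLM can be consumed directly by a conventional single-pass ASR decoder.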


Bibliographic reference.  Pelemans, Joris / Shazeer, Noam / Chelba, Ciprian (2015): "Pruning sparse non-negative matrix n-gram language models", In INTERSPEECH-2015, 1433-1437.