16th Annual Conference of the International Speech Communication Association

Dresden, Germany
September 6-10, 2015

Sparse Non-Negative Matrix Language Modeling for Skip-Grams

Noam Shazeer, Joris Pelemans, Ciprian Chelba

Google, USA

We present a novel family of language model (LM) estimation techniques named Sparse Non-negative Matrix (SNM) estimation. A first set of experiments empirically evaluating these techniques on the One Billion Word Benchmark [3] shows that with skip-gram features SNM LMs match state-of-the-art recurrent neural network (RNN) LMs; combining the two modeling techniques yields the best known result on the benchmark. The computational advantages of SNM over both maximum entropy and RNN LM estimation are probably its main strength: the approach offers the same flexibility as maximum entropy in combining arbitrary features effectively, yet should scale to very large amounts of data as gracefully as n-gram LMs do.
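To make the two ingredients named in the abstract concrete, the sketch below enumerates skip-gram features for a word history and scores the next word with a sparse non-negative feature-to-word weight matrix. This is a minimal illustration under assumed conventions: the function names (skipgram_features, snm_probability), the feature limits (max_order, max_skip), and the dict-of-dicts stand-in for the matrix M are hypothetical, not the paper's actual parameterization or training procedure.

    def skipgram_features(history, max_order=3, max_skip=2):
        """Enumerate contiguous n-gram and skip-gram features of a word
        history. A skip-gram keeps a remote span, skips a run of words,
        then keeps the span adjacent to the prediction point. Limits are
        illustrative; the paper explores richer configurations."""
        h, n, feats = list(history), len(history), []
        # Contiguous n-grams: suffixes of the history.
        for k in range(1, min(max_order, n) + 1):
            feats.append(("ngram", tuple(h[n - k:])))
        # Skip-grams: (remote words, explicit skip length, adjacent words).
        for skip in range(1, max_skip + 1):
            for adj in range(1, max_order):
                for rem in range(1, max_order):
                    end = n - adj - skip
                    if end - rem < 0:
                        break
                    feats.append(
                        ("skip", tuple(h[end - rem:end]), skip, tuple(h[n - adj:])))
        return feats

    def snm_probability(history, word, M, vocab):
        """Linear SNM-style scoring: a word's score is the sum of its
        non-negative weights over the active features; probabilities come
        from normalizing over the vocabulary. M maps feature -> {word:
        weight}, a sparse stand-in for the learned non-negative matrix."""
        feats = skipgram_features(history)
        score = lambda w: sum(M.get(f, {}).get(w, 0.0) for f in feats)
        z = sum(score(w) for w in vocab)
        return score(word) / z if z > 0 else 1.0 / len(vocab)

For example, with M = {("ngram", ("the",)): {"mat": 2.0, "cat": 1.0}} and vocab = ["mat", "cat"], the call snm_probability(["on", "the"], "mat", M, vocab) returns 2/3, since only the unigram feature of the history carries weight.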


Bibliographic reference. Shazeer, Noam / Pelemans, Joris / Chelba, Ciprian (2015): "Sparse non-negative matrix language modeling for skip-grams", in INTERSPEECH 2015, pp. 1428-1432.