Fast Neural Network Language Model Lookups at N-Gram Speeds

Yinghui Huang, Abhinav Sethy, Bhuvana Ramabhadran


Feed-forward neural network language models (NNLMs) have shown consistent gains over backoff word n-gram models in a variety of tasks. However, backoff n-gram models still dominate in applications with real-time decoding requirements, as their word probabilities can be computed orders of magnitude faster than those of an NNLM. In this paper, we present a combination of techniques that speeds up probability computation from a neural network language model to a level comparable to the word n-gram model, without any approximations. We present results on state-of-the-art systems for broadcast news transcription and conversational speech which demonstrate the speed improvements in real-time factor and probability computation while retaining the WER gains from the NNLM.


DOI: 10.21437/Interspeech.2017-564

Cite as: Huang, Y., Sethy, A., Ramabhadran, B. (2017) Fast Neural Network Language Model Lookups at N-Gram Speeds. Proc. Interspeech 2017, 274-278, DOI: 10.21437/Interspeech.2017-564.


@inproceedings{Huang2017,
  author={Yinghui Huang and Abhinav Sethy and Bhuvana Ramabhadran},
  title={Fast Neural Network Language Model Lookups at N-Gram Speeds},
  year={2017},
  booktitle={Proc. Interspeech 2017},
  pages={274--278},
  doi={10.21437/Interspeech.2017-564},
  url={http://dx.doi.org/10.21437/Interspeech.2017-564}
}