Residual Memory Networks in Language Modeling: Improving the Reputation of Feed-Forward Networks

Karel Beneš, Murali Karthick Baskar, Lukáš Burget


We introduce the Residual Memory Network (RMN) architecture to language modeling. RMN is a feed-forward neural network architecture that incorporates residual connections and time-delay connections, allowing it to naturally capture information from a substantial time context. As this is the first time RMNs are applied to language modeling, we thoroughly investigate their behaviour on the well-studied Penn Treebank corpus. We modify the model slightly for the needs of language modeling, reducing both its time and memory consumption. Our results show that the RMN is a suitable choice for small-sized neural language models: with a test perplexity of 112.7 and as few as 2.3M parameters, it outperforms both a much larger vanilla RNN (perplexity 124, 8M parameters) and a similarly sized LSTM (perplexity 115, 2.08M parameters), while being less than 3 perplexity points worse than an LSTM twice its size.
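The combination of residual and time-delay connections described above can be illustrated with a minimal sketch. This is not the paper's exact model: the weight shapes, the ReLU nonlinearity, the zero-padding of early time steps, and the single fixed delay per layer are all illustrative assumptions. The key property it demonstrates is that stacking L layers, each looking `delay` steps into the past, gives the top layer a receptive field of roughly L * delay time steps while remaining strictly feed-forward and causal.

```python
import numpy as np

def rmn_forward(x, weights, delay_weights, delay=1):
    """Sketch of an RMN-style layer stack (illustrative, not the paper's exact model).

    x: (T, d) sequence of input embeddings; returns a (T, d) output sequence.
    Each layer combines the previous layer's output at the current time step
    with its output `delay` steps in the past (the time-delay connection),
    and wraps the result in a residual skip connection.
    """
    h = x
    for W, Wd in zip(weights, delay_weights):
        T, d = h.shape
        # Delayed copy of the layer input, zero-padded at the sequence start
        # so no future time step can influence the present (causality).
        if delay < T:
            h_delayed = np.vstack([np.zeros((delay, d)), h[:-delay]])
        else:
            h_delayed = np.zeros_like(h)
        pre = h @ W + h_delayed @ Wd
        h = h + np.maximum(pre, 0.0)  # residual connection around a ReLU block
    return h

# Toy run: 5 time steps, 4-dimensional embeddings, 3 layers with delay 2,
# giving the top layer a receptive field reaching 6 steps into the past.
rng = np.random.default_rng(0)
T, d, L = 5, 4, 3
x = rng.standard_normal((T, d))
Ws = [rng.standard_normal((d, d)) * 0.1 for _ in range(L)]
Wds = [rng.standard_normal((d, d)) * 0.1 for _ in range(L)]
y = rmn_forward(x, Ws, Wds, delay=2)
print(y.shape)  # (5, 4)
```

Because every connection only looks backward in time, such a stack can be trained with plain feed-forward backpropagation, with no backpropagation through time, which is one source of the efficiency advantage the abstract claims over recurrent models.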


DOI: 10.21437/Interspeech.2017-1442

Cite as: Beneš, K., Baskar, M.K., Burget, L. (2017) Residual Memory Networks in Language Modeling: Improving the Reputation of Feed-Forward Networks. Proc. Interspeech 2017, 284-288, DOI: 10.21437/Interspeech.2017-1442.


@inproceedings{Beneš2017,
  author={Karel Beneš and Murali Karthick Baskar and Lukáš Burget},
  title={Residual Memory Networks in Language Modeling: Improving the Reputation of Feed-Forward Networks},
  year=2017,
  booktitle={Proc. Interspeech 2017},
  pages={284--288},
  doi={10.21437/Interspeech.2017-1442},
  url={http://dx.doi.org/10.21437/Interspeech.2017-1442}
}