INTERSPEECH 2014
15th Annual Conference of the International Speech Communication Association

Singapore
September 14-18, 2014

Feed Forward Pre-Training for Recurrent Neural Network Language Models

Siva Reddy Gangireddy, Fergus McInnes, Steve Renals

University of Edinburgh, UK

The recurrent neural network language model (RNNLM) has been demonstrated to consistently reduce perplexities and automatic speech recognition (ASR) word error rates across a variety of domains. In this paper we propose a pre-training method for the RNNLM, by sharing the output weights of a feed forward neural network language model (NNLM) with the RNNLM. This is accomplished by first fine-tuning the weights of the NNLM, which are then used to initialise the output weights of an RNNLM with the same number of hidden units. We have carried out text-based experiments on the Penn Treebank Wall Street Journal data, and ASR experiments on the TED talks data used in the International Workshop on Spoken Language Translation (IWSLT) evaluation campaigns. Across the experiments, we observe small improvements in perplexity and ASR word error rate.
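The weight-sharing scheme described in the abstract can be sketched as follows. This is a minimal illustration under assumed shapes and names (the vocabulary size, hidden size, and parameter names are not from the paper; actual NNLM fine-tuning is omitted): the output weights of a fine-tuned feed forward NNLM are copied into an RNNLM with the same number of hidden units.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, not taken from the paper.
vocab_size, hidden_size = 10_000, 200

# Feed forward NNLM parameters (shapes only; in the paper's scheme
# these would first be fine-tuned on the training text).
nnlm = {
    "W_in":  rng.normal(0.0, 0.1, (vocab_size, hidden_size)),  # input/projection weights
    "W_out": rng.normal(0.0, 0.1, (hidden_size, vocab_size)),  # hidden-to-output weights
}

# RNNLM with the same number of hidden units, so the output
# weight matrices have identical shapes.
rnnlm = {
    "W_in":  rng.normal(0.0, 0.1, (vocab_size, hidden_size)),
    "W_rec": rng.normal(0.0, 0.1, (hidden_size, hidden_size)),  # recurrent weights
    "W_out": np.zeros((hidden_size, vocab_size)),
}

# Pre-training step: initialise the RNNLM output weights with the
# (fine-tuned) NNLM output weights before RNNLM training begins.
rnnlm["W_out"] = nnlm["W_out"].copy()
```

After this initialization the RNNLM would be trained as usual; the copy (rather than a reference) keeps the two models' parameters independent during subsequent updates.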

Bibliographic reference.  Gangireddy, Siva Reddy / McInnes, Fergus / Renals, Steve (2014): "Feed forward pre-training for recurrent neural network language models", In INTERSPEECH-2014, 2620–2624.