INTERSPEECH 2013
14th Annual Conference of the International Speech Communication Association

Lyon, France
August 25-29, 2013

Exploiting the Succeeding Words in Recurrent Neural Network Language Models

Yangyang Shi (1), Martha Larson (1), Pascal Wiggers (2), Catholijn M. Jonker (1)

(1) Technische Universiteit Delft, The Netherlands
(2) Hogeschool van Amsterdam, The Netherlands

In automatic speech recognition, conventional language models recognize the current word using only information from preceding words. Recently, Recurrent Neural Network Language Models (RNNLMs) have drawn increased research attention because of their ability to outperform conventional n-gram language models. The superiority of RNNLMs lies in their ability to capture long-distance word dependencies. RNNLMs are, in practice, applied in an N-best rescoring framework, which offers new possibilities for information integration. In particular, it becomes interesting to extend the ability of RNNLMs to capture long-distance information by also allowing them to exploit information from succeeding words during the rescoring process. This paper proposes three approaches for exploiting succeeding word information in RNNLMs. The first is a forward-backward model that combines RNNLMs exploiting preceding and succeeding words. The second is an extension of a Maximum Entropy RNNLM (RNNME) that incorporates succeeding word information. The third is an approach that combines language models using two-pass alternating rescoring. Experimental results demonstrate the ability of succeeding word information to improve RNNLM performance, both in terms of perplexity and Word Error Rate (WER). The best performance is achieved by a combined model that exploits the three words succeeding the current word.
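To illustrate the general idea behind the first approach, the following is a minimal sketch of forward-backward N-best rescoring: the score of a forward RNNLM (conditioning on preceding words) is interpolated with the score of a backward RNNLM run over the reversed sentence (so each word is predicted from its succeeding words). The functions forward_rnnlm_logprob and backward_rnnlm_logprob, the interpolation weight lam, and the language model weight lm_weight are assumptions for illustration; this is not the authors' implementation.

    # Sketch of forward-backward N-best rescoring (assumed helper functions,
    # not the paper's actual code).
    def rescore_nbest(hypotheses, acoustic_scores,
                      forward_rnnlm_logprob, backward_rnnlm_logprob,
                      lam=0.5, lm_weight=10.0):
        """Return the hypothesis with the best combined score."""
        best_hyp, best_score = None, float("-inf")
        for hyp, am_score in zip(hypotheses, acoustic_scores):
            # Forward model scores the word sequence left-to-right.
            fwd = forward_rnnlm_logprob(hyp)
            # Backward model scores the reversed sequence, i.e. each word is
            # predicted from the words that follow it in the original order.
            bwd = backward_rnnlm_logprob(list(reversed(hyp)))
            lm_score = lam * fwd + (1.0 - lam) * bwd
            score = am_score + lm_weight * lm_score
            if score > best_score:
                best_hyp, best_score = hyp, score
        return best_hyp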


Bibliographic reference.  Shi, Yangyang / Larson, Martha / Wiggers, Pascal / Jonker, Catholijn M. (2013): "Exploiting the succeeding words in recurrent neural network language models", In INTERSPEECH-2013, 632-636.