Word-Phrase-Entity Recurrent Neural Networks for Language Modeling

Michael Levit, Sarangarajan Parthasarathy, Shuangyu Chang


The recently introduced framework of Word-Phrase-Entity (WPE) language modeling is applied to Recurrent Neural Networks (RNNs) and yields improvements similar to those reported for n-gram language models. In the proposed architecture, RNN LMs do not operate on individual lexical items (words); instead, they consume sequences of tokens that may be words, phrases, or classes such as named entities, with the optimal representation for a particular input sentence determined iteratively. We show how auxiliary techniques previously described for n-gram WPE language models, such as token-level interpolation and personalization, can also be realized with recurrent networks and lead to similar perplexity improvements.
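To make the token-level interpolation mentioned above concrete, here is a minimal toy sketch (not from the paper): a sentence is represented as a sequence of WPE tokens, and each token's probability is a linear mixture of two models, e.g. a background and a personalized LM. The token names, dictionaries, and the function `interpolate_sentence_logprob` are all hypothetical placeholders for real RNN LM outputs.

```python
import math

def interpolate_sentence_logprob(tokens, p_background, p_personal, lam=0.5):
    """Interpolated log-probability of a WPE token sequence.

    p_background / p_personal map each token to its probability under
    the respective model; in practice these would come from RNN LMs.
    lam is the interpolation weight for the background model.
    """
    logprob = 0.0
    for tok in tokens:
        # Per-token linear interpolation of the two model probabilities.
        p = lam * p_background[tok] + (1.0 - lam) * p_personal[tok]
        logprob += math.log(p)
    return logprob

# A sentence tokenized into WPE units: a phrase token and an entity class.
tokens = ["send_a_text_to", "<CONTACT_NAME>"]
p_bg = {"send_a_text_to": 0.01, "<CONTACT_NAME>": 0.2}
p_pers = {"send_a_text_to": 0.05, "<CONTACT_NAME>": 0.6}

print(interpolate_sentence_logprob(tokens, p_bg, p_pers, lam=0.5))
```

In this toy setting, interpolation happens per token rather than per sentence, which is what allows a personalized model to dominate only on tokens (such as entity classes) where it is informative.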


DOI: 10.21437/Interspeech.2016-44

Cite as

Levit, M., Parthasarathy, S., Chang, S. (2016) Word-Phrase-Entity Recurrent Neural Networks for Language Modeling. Proc. Interspeech 2016, 3514-3518.

Bibtex
@inproceedings{Levit+2016,
  author={Michael Levit and Sarangarajan Parthasarathy and Shuangyu Chang},
  title={Word-Phrase-Entity Recurrent Neural Networks for Language Modeling},
  year={2016},
  booktitle={Interspeech 2016},
  doi={10.21437/Interspeech.2016-44},
  url={http://dx.doi.org/10.21437/Interspeech.2016-44},
  pages={3514--3518}
}