Enriching Rare Word Representations in Neural Language Models by Embedding Matrix Augmentation

Yerbolat Khassanov, Zhiping Zeng, Van Tung Pham, Haihua Xu, Eng Siong Chng


Neural language models (NLMs) achieve strong generalization by learning dense representations of words and using them to estimate the probability distribution over the vocabulary. However, learning reliable representations for rare words is challenging, causing the NLM to produce unreliable probability estimates for them. To address this problem, we propose a method to enrich the representations of rare words in a pre-trained NLM and consequently improve its probability estimation performance. The proposed method augments the word embedding matrices of the pre-trained NLM while keeping all other parameters unchanged. Specifically, it updates the embedding vectors of rare words using the embedding vectors of other semantically and syntactically similar words. To evaluate the proposed method, we enrich rare street names in a pre-trained NLM and use it to rescore the 100-best hypotheses output by a Singapore English speech recognition system. The enriched NLM reduces the word error rate by 6% relative and improves the recognition accuracy of the rare words by 16% absolute compared to the baseline NLM.
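The abstract only outlines the augmentation step, so the sketch below shows one plausible realization of the idea: a rare word's embedding vector is moved towards the mean embedding of semantically and syntactically similar in-vocabulary words, while all other model parameters stay frozen. The `similar_words` mapping, the function name, and the interpolation weight `alpha` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def enrich_rare_embeddings(emb_matrix, word2idx, similar_words, alpha=0.5):
    """Return a copy of emb_matrix in which each rare word's vector is
    interpolated towards the mean embedding of its similar words.

    emb_matrix    : (V, d) array, an input or output embedding matrix of the NLM
    word2idx      : dict mapping word -> row index in emb_matrix
    similar_words : dict mapping rare word -> list of similar frequent words
                    (assumed to be supplied externally, e.g. from a lexicon)
    alpha         : interpolation weight; alpha=1.0 fully replaces the rare
                    word's vector with the similar words' mean vector
    """
    enriched = emb_matrix.copy()
    for rare, sims in similar_words.items():
        if rare not in word2idx:
            continue
        sim_idx = [word2idx[w] for w in sims if w in word2idx]
        if not sim_idx:
            continue
        mean_vec = emb_matrix[sim_idx].mean(axis=0)
        r = word2idx[rare]
        enriched[r] = (1.0 - alpha) * emb_matrix[r] + alpha * mean_vec
    return enriched

# Toy example: enrich a rare street-name token using frequent related words
# (4-word vocabulary, 3-dimensional embeddings)
emb = np.random.randn(4, 3).astype(np.float32)
w2i = {"road": 0, "street": 1, "avenue": 2, "lorong": 3}
enriched = enrich_rare_embeddings(emb, w2i, {"lorong": ["road", "street", "avenue"]})
```

In line with the abstract, such an update would be applied to the word embedding matrices of the pre-trained NLM only (e.g. both input and output embeddings), leaving the remaining network parameters unchanged.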


DOI: 10.21437/Interspeech.2019-1858

Cite as: Khassanov, Y., Zeng, Z., Pham, V.T., Xu, H., Chng, E.S. (2019) Enriching Rare Word Representations in Neural Language Models by Embedding Matrix Augmentation. Proc. Interspeech 2019, 3505-3509, DOI: 10.21437/Interspeech.2019-1858.


@inproceedings{Khassanov2019,
  author={Yerbolat Khassanov and Zhiping Zeng and Van Tung Pham and Haihua Xu and Eng Siong Chng},
  title={{Enriching Rare Word Representations in Neural Language Models by Embedding Matrix Augmentation}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={3505--3509},
  doi={10.21437/Interspeech.2019-1858},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1858}
}