Unsupervised Adaptation of Recurrent Neural Network Language Models

Siva Reddy Gangireddy, Pawel Swietojanski, Peter Bell, Steve Renals


Recurrent neural network language models (RNNLMs) have been shown to consistently reduce the word error rates (WERs) of large-vocabulary speech recognition systems employing n-gram LMs. In this paper we investigate supervised and unsupervised discriminative adaptation of RNNLMs in a broadcast transcription task to target domains defined by either genre or show. We explore two approaches: (1) scaling forward-propagated hidden activations (the Learning Hidden Unit Contributions (LHUC) technique) and (2) direct fine-tuning of all parameters of the RNNLM. To assess the effectiveness of the proposed methods we carry out experiments on multi-genre broadcast (MGB) data following the MGB-2015 challenge protocol. We observe small but significant improvements in WER compared to a strong unadapted RNNLM baseline.
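
For illustration, below is a minimal NumPy sketch of the first approach, LHUC-style scaling of forward-propagated hidden activations, assuming a simple Elman-style recurrent hidden layer; the names rnn_step, r_lhuc, W_xh, W_hh and b_h are illustrative and not taken from the paper.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h, r_lhuc):
    # Standard Elman-style recurrent hidden activation.
    h_t = np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)
    # LHUC: rescale each hidden unit by 2*sigmoid(r), a factor in (0, 2).
    # During adaptation only r_lhuc would be updated; W_xh, W_hh, b_h stay fixed.
    return 2.0 * sigmoid(r_lhuc) * h_t

Initialising r_lhuc to zero gives a scaling factor of 2*sigmoid(0) = 1 for every hidden unit, so the adapted model starts out identical to the unadapted RNNLM.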


DOI: 10.21437/Interspeech.2016-1342

Cite as

Gangireddy, S.R., Swietojanski, P., Bell, P., Renals, S. (2016) Unsupervised Adaptation of Recurrent Neural Network Language Models. Proc. Interspeech 2016, 2333-2337.

Bibtex
@inproceedings{Gangireddy+2016,
  author={Siva Reddy Gangireddy and Pawel Swietojanski and Peter Bell and Steve Renals},
  title={Unsupervised Adaptation of Recurrent Neural Network Language Models},
  year={2016},
  booktitle={Interspeech 2016},
  doi={10.21437/Interspeech.2016-1342},
  url={http://dx.doi.org/10.21437/Interspeech.2016-1342},
  pages={2333--2337}
}