Cold Fusion: Training Seq2Seq Models Together with Language Models

Anuroop Sriram, Heewoo Jun, Sanjeev Satheesh, Adam Coates


Sequence-to-sequence (Seq2Seq) models with attention have excelled at tasks that involve generating natural language sentences, such as machine translation, image captioning and speech recognition. Performance has been further improved by leveraging unlabeled data, often in the form of a language model. In this work, we present the Cold Fusion method, which leverages a pre-trained language model during training, and we show its effectiveness on the speech recognition task. We show that Seq2Seq models with Cold Fusion are able to better utilize language information, enjoying i) faster convergence and better generalization, and ii) almost complete transfer to a new domain while using less than 10% of the labeled training data.
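At a high level, Cold Fusion feeds the pre-trained language model's output into the Seq2Seq decoder through a learned gate at every decoding step, so the decoder is trained from the start alongside the fixed LM. The following NumPy sketch illustrates one such gated fusion step; the layer sizes, variable names, and randomly initialized weights here are assumptions for illustration only, standing in for the learned parameters of a real model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes for illustration: decoder state, LM hidden projection, vocabulary.
d_dec, d_lm, vocab = 64, 32, 100

# Random matrices stand in for learned weights (a real model would train these).
W_lm = rng.normal(scale=0.1, size=(d_lm, vocab))           # projects LM logits
W_g = rng.normal(scale=0.1, size=(d_lm, d_dec + d_lm))     # gating layer
W_out = rng.normal(scale=0.1, size=(vocab, d_dec + d_lm))  # output layer

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cold_fusion_step(s_dec, lm_logits):
    """One decoding step: fuse the decoder state with the LM output via a gate."""
    h_lm = np.maximum(W_lm @ lm_logits, 0.0)           # project LM logits (ReLU)
    g = sigmoid(W_g @ np.concatenate([s_dec, h_lm]))   # gate from both states
    s_cf = np.concatenate([s_dec, g * h_lm])           # gated fused state
    logits = W_out @ s_cf                              # final token scores
    return logits, g

# Dummy decoder state and LM logits for a single step.
s_dec = rng.normal(size=d_dec)
lm_logits = rng.normal(size=vocab)
logits, gate = cold_fusion_step(s_dec, lm_logits)
```

Because the gate is computed from both the decoder state and the LM output, the model can learn to down-weight the LM wherever the acoustic evidence suffices, which is one intuition behind the domain-transfer results reported in the abstract.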


DOI: 10.21437/Interspeech.2018-1392

Cite as: Sriram, A., Jun, H., Satheesh, S., Coates, A. (2018) Cold Fusion: Training Seq2Seq Models Together with Language Models. Proc. Interspeech 2018, 387-391, DOI: 10.21437/Interspeech.2018-1392.


@inproceedings{Sriram2018,
  author={Anuroop Sriram and Heewoo Jun and Sanjeev Satheesh and Adam Coates},
  title={Cold Fusion: Training Seq2Seq Models Together with Language Models},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={387--391},
  doi={10.21437/Interspeech.2018-1392},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1392}
}