Very Deep Self-Attention Networks for End-to-End Speech Recognition

Ngoc-Quan Pham, Thai-Son Nguyen, Jan Niehues, Markus Müller, Alex Waibel


Recently, end-to-end sequence-to-sequence models for speech recognition have gained significant interest in the research community. While previous architecture choices revolved around time-delay neural networks (TDNN) and long short-term memory (LSTM) recurrent neural networks, we propose to use self-attention via the Transformer architecture as an alternative. Our analysis shows that deep Transformer networks with high learning capacity are able to exceed the performance of previous end-to-end approaches and even match conventional hybrid systems. Moreover, we trained very deep models with up to 48 Transformer layers for both the encoder and the decoder, combined with stochastic residual connections, which greatly improve generalization and training efficiency. The resulting models outperform all previous end-to-end ASR approaches on the Switchboard benchmark. An ensemble of these models achieves 9.9% and 17.7% WER on the Switchboard and CallHome test sets, respectively. This finding brings our end-to-end models to a competitive level with previous hybrid systems. Further, with model ensembling the Transformers can outperform certain hybrid systems, which are more complicated in terms of both structure and training procedure.
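The stochastic residual connections mentioned above follow the stochastic-depth idea: during training, each layer's transform branch is skipped with some probability, leaving only the identity path, and at inference the branch is rescaled by its survival probability. The sketch below is a minimal NumPy illustration of that mechanism; the function name, drop probability, and toy layers are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def stochastic_residual(x, layer_fn, drop_prob, training, rng=None):
    """Residual connection whose transform branch is randomly skipped.

    During training, the branch `layer_fn(x)` is dropped with probability
    `drop_prob`, so only the identity path survives. At inference the
    branch is always applied, scaled by its survival probability so the
    expected activations match those seen during training.
    """
    survival = 1.0 - drop_prob
    if training:
        rng = rng or np.random.default_rng()
        if rng.random() < drop_prob:
            return x                       # skip the layer entirely
        return x + layer_fn(x)             # keep the full residual branch
    return x + survival * layer_fn(x)      # inference: rescale the branch


# Toy usage: a 48-layer stack where each "layer" simply adds 1.0,
# evaluated in inference mode with a 50% drop probability.
layers = [lambda x: np.ones_like(x) for _ in range(48)]
x = np.zeros(4)
for f in layers:
    x = stochastic_residual(x, f, drop_prob=0.5, training=False)
```

Because skipped layers reduce the effective depth seen in each training step, this scheme acts as a regularizer and shortens the gradient path, which is one plausible reason very deep stacks of this kind remain trainable.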


DOI: 10.21437/Interspeech.2019-2702

Cite as: Pham, N., Nguyen, T., Niehues, J., Müller, M., Waibel, A. (2019) Very Deep Self-Attention Networks for End-to-End Speech Recognition. Proc. Interspeech 2019, 66-70, DOI: 10.21437/Interspeech.2019-2702.


@inproceedings{Pham2019,
  author={Ngoc-Quan Pham and Thai-Son Nguyen and Jan Niehues and Markus Müller and Alex Waibel},
  title={{Very Deep Self-Attention Networks for End-to-End Speech Recognition}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={66--70},
  doi={10.21437/Interspeech.2019-2702},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2702}
}