Improving Transformer-Based End-to-End Speech Recognition with Connectionist Temporal Classification and Language Model Integration

Shigeki Karita, Nelson Enrique Yalta Soplin, Shinji Watanabe, Marc Delcroix, Atsunori Ogawa, Tomohiro Nakatani


The state-of-the-art neural network architecture named Transformer has been used successfully for many sequence-to-sequence transformation tasks. The advantage of this architecture is its fast iteration speed during training because, unlike recurrent neural networks (RNNs), it involves no sequential operations. However, an RNN is still the best option for end-to-end automatic speech recognition (ASR) tasks in terms of overall training speed (i.e., convergence) and word error rate (WER) because of effective joint training and decoding methods. To realize a faster and more accurate ASR system, we combine Transformer with the advances in RNN-based ASR. In our experiments, we found that Transformer training converges more slowly than RNN training in terms of the learning curve, and that naive language model (LM) integration is difficult. To address these problems, we integrate connectionist temporal classification (CTC) with Transformer for joint training and decoding. This approach makes training faster than with RNNs and assists LM integration. Our proposed ASR system achieves significant improvements on various ASR tasks. For example, it reduced the WER from 11.1% to 4.5% on the Wall Street Journal and from 16.1% to 11.6% on TED-LIUM by introducing CTC and LM integration into the Transformer baseline.
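The joint CTC/attention approach described in the abstract can be sketched as an interpolated training objective and a combined decoding score. Below is a minimal illustration; the function names, the weights `alpha`, `lam`, and `beta`, and their default values are hypothetical placeholders, not the paper's exact settings:

```python
def joint_loss(loss_ctc: float, loss_att: float, alpha: float = 0.3) -> float:
    """Multi-task training objective: interpolate the CTC loss with the
    attention (decoder cross-entropy) loss using a hypothetical weight alpha."""
    return alpha * loss_ctc + (1.0 - alpha) * loss_att


def joint_decode_score(logp_ctc: float, logp_att: float, logp_lm: float,
                       lam: float = 0.3, beta: float = 0.5) -> float:
    """One-pass joint decoding: combine the CTC and attention log-probabilities
    of a partial hypothesis, plus a shallow-fusion LM term scaled by beta."""
    return lam * logp_ctc + (1.0 - lam) * logp_att + beta * logp_lm
```

During beam search, hypotheses would be ranked by a score of this form; the CTC term penalizes hypotheses with implausible frame-level alignments, which is one way the approach assists LM integration.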


DOI: 10.21437/Interspeech.2019-1938

Cite as: Karita, S., Soplin, N.E.Y., Watanabe, S., Delcroix, M., Ogawa, A., Nakatani, T. (2019) Improving Transformer-Based End-to-End Speech Recognition with Connectionist Temporal Classification and Language Model Integration. Proc. Interspeech 2019, 1408-1412, DOI: 10.21437/Interspeech.2019-1938.


@inproceedings{Karita2019,
  author={Shigeki Karita and Nelson Enrique Yalta Soplin and Shinji Watanabe and Marc Delcroix and Atsunori Ogawa and Tomohiro Nakatani},
  title={{Improving Transformer-Based End-to-End Speech Recognition with Connectionist Temporal Classification and Language Model Integration}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={1408--1412},
  doi={10.21437/Interspeech.2019-1938},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1938}
}