CTC-Synchronous Training for Monotonic Attention Model

Hirofumi Inaguma, Masato Mimura, Tatsuya Kawahara


Monotonic chunkwise attention (MoChA) has been studied for online streaming automatic speech recognition (ASR) based on a sequence-to-sequence framework. In contrast to connectionist temporal classification (CTC), backward probabilities cannot be leveraged in the alignment marginalization process during training because of the left-to-right dependency in the decoder. This results in the propagation of alignment errors to subsequent token generation. To address this problem, we propose CTC-synchronous training (CTC-ST), in which MoChA uses CTC alignments to learn optimal monotonic alignments. Reference CTC alignments are extracted from a CTC branch that shares the same encoder with the decoder. The entire model is jointly optimized so that the expected boundaries from MoChA are synchronized with the alignments. Experimental evaluations on the TEDLIUM release-2 and Librispeech corpora show that the proposed method significantly improves recognition, especially for long utterances. We also show that CTC-ST can bring out the full potential of SpecAugment for MoChA.
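The synchronization idea in the abstract can be illustrated with a small sketch: compute each token's expected boundary frame from MoChA's monotonic selection probabilities, then penalize its distance to a reference boundary frame taken from a CTC alignment. The function names and the exact L1 form of the penalty are illustrative assumptions for exposition, not the paper's definitive formulation.

```python
import numpy as np

def expected_boundaries(alpha):
    """Expected boundary frame per output token.

    alpha: (num_tokens, num_frames) array of MoChA selection
    probabilities; each row is a distribution over encoder frames.
    """
    frames = np.arange(alpha.shape[1])
    return alpha @ frames  # expectation of the boundary frame index

def ctc_sync_loss(alpha, ctc_boundaries):
    """Hypothetical CTC-synchronous penalty: mean L1 distance between
    MoChA's expected boundaries and reference CTC boundary frames
    (one reference frame per output token)."""
    b_mocha = expected_boundaries(alpha)
    return float(np.mean(np.abs(b_mocha - np.asarray(ctc_boundaries))))
```

In joint training, a term like this would be added to the attention and CTC losses so that MoChA's boundaries are pulled toward the CTC alignments extracted from the shared encoder.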


DOI: 10.21437/Interspeech.2020-1069

Cite as: Inaguma, H., Mimura, M., Kawahara, T. (2020) CTC-Synchronous Training for Monotonic Attention Model. Proc. Interspeech 2020, 571-575, DOI: 10.21437/Interspeech.2020-1069.


@inproceedings{Inaguma2020,
  author={Hirofumi Inaguma and Masato Mimura and Tatsuya Kawahara},
  title={{CTC-Synchronous Training for Monotonic Attention Model}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={571--575},
  doi={10.21437/Interspeech.2020-1069},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1069}
}