Semi-Supervised Sequence-to-Sequence ASR Using Unpaired Speech and Text

Murali Karthick Baskar, Shinji Watanabe, Ramon Astudillo, Takaaki Hori, Lukáš Burget, Jan Černocký


Sequence-to-sequence automatic speech recognition (ASR) models require large quantities of data to attain high performance. For this reason, there has been a recent surge of interest in unsupervised and semi-supervised training of such models. This work builds upon recent results showing notable improvements in semi-supervised training using cycle consistency and related techniques. Such techniques derive training procedures and losses that can leverage unpaired speech and/or text data by combining ASR with text-to-speech (TTS) models. In particular, this work proposes a new semi-supervised loss combining an end-to-end differentiable ASR→TTS loss with a TTS→ASR loss. The method leverages both unpaired speech and text data to outperform recently proposed related techniques in terms of word error rate (WER). We provide extensive results analyzing the impact of data quantity and of the speech and text modalities, and show consistent gains across the WSJ and LibriSpeech corpora. Our code is provided in ESPnet to reproduce the experiments.
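To make the combined objective concrete, below is a minimal, self-contained sketch of how an ASR→TTS loss on unpaired speech and a TTS→ASR loss on unpaired text can be summed into a single semi-supervised training loss. This is an illustrative toy in PyTorch, not the authors' ESPnet implementation: TinyASR, TinyTTS, the L1 reconstruction term, and the weight alpha are all simplifying assumptions standing in for full attention-based sequence-to-sequence models.

import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, FEAT_DIM, T = 32, 80, 50  # toy vocabulary / feature / time sizes

class TinyASR(nn.Module):
    """Hypothetical stand-in: maps features (B, T, FEAT_DIM) to token logits (B, T, VOCAB)."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(FEAT_DIM, VOCAB)
    def forward(self, speech):
        return self.proj(speech)

class TinyTTS(nn.Module):
    """Hypothetical stand-in: maps token posteriors (B, T, VOCAB) to features (B, T, FEAT_DIM)."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(VOCAB, FEAT_DIM)
    def forward(self, token_probs):
        return self.proj(token_probs)

def asr_to_tts_loss(asr, tts, speech):
    """Unpaired speech: ASR -> soft tokens -> TTS reconstruction error.
    Softmax posteriors keep the chain end-to-end differentiable."""
    token_probs = F.softmax(asr(speech), dim=-1)
    speech_rec = tts(token_probs)
    return F.l1_loss(speech_rec, speech)

def tts_to_asr_loss(tts, asr, text):
    """Unpaired text: TTS synthesis -> ASR cross-entropy against the original text."""
    token_probs = F.one_hot(text, VOCAB).float()
    speech_syn = tts(token_probs)
    logits = asr(speech_syn)                        # (B, T, VOCAB)
    return F.cross_entropy(logits.transpose(1, 2),  # CE expects (B, VOCAB, T)
                           text)

def semi_supervised_loss(asr, tts, unpaired_speech, unpaired_text, alpha=1.0):
    """Combined objective from the abstract: ASR->TTS plus weighted TTS->ASR."""
    return (asr_to_tts_loss(asr, tts, unpaired_speech)
            + alpha * tts_to_asr_loss(tts, asr, unpaired_text))

if __name__ == "__main__":
    asr, tts = TinyASR(), TinyTTS()
    speech = torch.randn(4, T, FEAT_DIM)        # batch of unpaired speech
    text = torch.randint(0, VOCAB, (4, T))      # batch of unpaired text
    print(semi_supervised_loss(asr, tts, speech, text).item())

In practice the chain through discrete tokens requires a differentiable relaxation; the sketch uses softmax posteriors in the ASR→TTS direction, which is one common choice, and the paper itself should be consulted for the exact losses used.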


 DOI: 10.21437/Interspeech.2019-3167

Cite as: Baskar, M.K., Watanabe, S., Astudillo, R., Hori, T., Burget, L., Černocký, J. (2019) Semi-Supervised Sequence-to-Sequence ASR Using Unpaired Speech and Text. Proc. Interspeech 2019, 3790-3794, DOI: 10.21437/Interspeech.2019-3167.


@inproceedings{Baskar2019,
  author={Murali Karthick Baskar and Shinji Watanabe and Ramon Astudillo and Takaaki Hori and Lukáš Burget and Jan Černocký},
  title={{Semi-Supervised Sequence-to-Sequence ASR Using Unpaired Speech and Text}},
  year={2019},
  booktitle={Proc. Interspeech 2019},
  pages={3790--3794},
  doi={10.21437/Interspeech.2019-3167},
  url={http://dx.doi.org/10.21437/Interspeech.2019-3167}
}