Pretraining by Backtranslation for End-to-End ASR in Low-Resource Settings

Matthew Wiesner, Adithya Renduchintala, Shinji Watanabe, Chunxi Liu, Najim Dehak, Sanjeev Khudanpur

We explore training attention-based encoder-decoder models for ASR in low-resource settings. These models perform poorly when trained on small amounts of transcribed speech, in part because they depend on having sufficient target-side text to train the attention and decoder networks. In this paper we address this shortcoming by pretraining our network parameters using only text-based data and transcribed speech from other languages. We analyze the relative contributions of both sources of data. Across three test languages, our text-based approach resulted in a 20% average relative improvement over a text-based augmentation technique without pretraining. Using transcribed speech from nearby languages gives a further 20–30% relative reduction in character error rate.

DOI: 10.21437/Interspeech.2019-3254

Cite as: Wiesner, M., Renduchintala, A., Watanabe, S., Liu, C., Dehak, N., Khudanpur, S. (2019) Pretraining by Backtranslation for End-to-End ASR in Low-Resource Settings. Proc. Interspeech 2019, 4375-4379, DOI: 10.21437/Interspeech.2019-3254.

@inproceedings{wiesner19_interspeech,
  author={Matthew Wiesner and Adithya Renduchintala and Shinji Watanabe and Chunxi Liu and Najim Dehak and Sanjeev Khudanpur},
  title={{Pretraining by Backtranslation for End-to-End ASR in Low-Resource Settings}},
  booktitle={Proc. Interspeech 2019},
  pages={4375--4379},
  doi={10.21437/Interspeech.2019-3254},
  year={2019}
}