End-to-End Text-to-Speech for Low-Resource Languages by Cross-Lingual Transfer Learning

Yuan-Jui Chen, Tao Tu, Cheng-chieh Yeh, Hung-Yi Lee


End-to-end text-to-speech (TTS) has shown great success given large quantities of paired text and speech data. However, such laborious data collection remains out of reach for at least 95% of the world's languages, which hinders the development of TTS in those languages. In this paper, we aim to build TTS systems for such low-resource (target) languages where only very limited paired data are available. We show that such a TTS system can be effectively constructed by transferring knowledge from a high-resource (source) language. Since a model trained on the source language cannot be directly applied to the target language due to the input-space mismatch, we propose a method to learn a mapping between source and target linguistic symbols. Thanks to this learned mapping, pronunciation information is preserved throughout the transfer procedure. Preliminary experiments show that only around 15 minutes of paired data are needed to obtain a reasonably good TTS system. Furthermore, analytic studies demonstrate that the automatically discovered mapping correlates well with phonetic expertise.
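The cross-lingual symbol mapping described above can be illustrated with a minimal sketch: assuming each language's linguistic symbols already have learned embedding vectors, each target-language symbol is matched to the source-language symbol with the most similar embedding. All function names and the toy embeddings below are hypothetical illustrations, not the paper's actual implementation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def map_symbols(source_emb, target_emb):
    """For each target symbol, return the index of the source symbol
    whose embedding is closest in cosine similarity (a hypothetical
    stand-in for the paper's learned symbol mapping)."""
    return [
        max(range(len(source_emb)), key=lambda i: cosine(t, source_emb[i]))
        for t in target_emb
    ]

# Toy example: two source symbols and two target symbols in a 2-D space.
source = [[1.0, 0.0], [0.0, 1.0]]
target = [[0.9, 0.1], [0.2, 0.8]]
print(map_symbols(source, target))  # each target symbol picks its nearest source symbol
```

With such a mapping in hand, target-language text can be fed to the source-trained model through the shared embedding space, which is what allows pronunciation knowledge to carry over despite disjoint symbol inventories.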


DOI: 10.21437/Interspeech.2019-2730

Cite as: Chen, Y.-J., Tu, T., Yeh, C.-C., Lee, H.-Y. (2019) End-to-End Text-to-Speech for Low-Resource Languages by Cross-Lingual Transfer Learning. Proc. Interspeech 2019, 2075-2079, DOI: 10.21437/Interspeech.2019-2730.


@inproceedings{Chen2019,
  author={Yuan-Jui Chen and Tao Tu and Cheng-chieh Yeh and Hung-Yi Lee},
  title={{End-to-End Text-to-Speech for Low-Resource Languages by Cross-Lingual Transfer Learning}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={2075--2079},
  doi={10.21437/Interspeech.2019-2730},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2730}
}