LibriTTS: A Corpus Derived from LibriSpeech for Text-to-Speech

Heiga Zen, Viet Dang, Rob Clark, Yu Zhang, Ron J. Weiss, Ye Jia, Zhifeng Chen, Yonghui Wu


This paper introduces a new speech corpus called “LibriTTS” designed for text-to-speech use. It is derived from the original audio and text materials of the LibriSpeech corpus, which has been used for training and evaluating automatic speech recognition systems. The new corpus inherits desirable properties of the LibriSpeech corpus while addressing a number of issues that make LibriSpeech less than ideal for text-to-speech work. The released corpus consists of 585 hours of speech data at a 24 kHz sampling rate from 2,456 speakers, along with the corresponding texts. Experimental results show that neural end-to-end TTS models trained on the LibriTTS corpus achieved mean opinion scores above 4.0 in naturalness for five out of six evaluation speakers. The corpus is freely available for download from http://www.openslr.org/60/.


DOI: 10.21437/Interspeech.2019-2441

Cite as: Zen, H., Dang, V., Clark, R., Zhang, Y., Weiss, R.J., Jia, Y., Chen, Z., Wu, Y. (2019) LibriTTS: A Corpus Derived from LibriSpeech for Text-to-Speech. Proc. Interspeech 2019, 1526-1530, DOI: 10.21437/Interspeech.2019-2441.


@inproceedings{Zen2019,
  author={Heiga Zen and Viet Dang and Rob Clark and Yu Zhang and Ron J. Weiss and Ye Jia and Zhifeng Chen and Yonghui Wu},
  title={{LibriTTS: A Corpus Derived from LibriSpeech for Text-to-Speech}},
  year={2019},
  booktitle={Proc. Interspeech 2019},
  pages={1526--1530},
  doi={10.21437/Interspeech.2019-2441},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2441}
}