State-of-the-art text-to-speech (TTS) systems have utilized pre-trained language models (PLMs) to enhance prosody and create more natural-sounding speech. However, while PLMs have been extensively researched for natural language understanding (NLU), their impact on TTS has been overlooked. In this study, we aim to address this gap by conducting a comparative analysis of different PLMs for two TTS tasks: prosody prediction and pause prediction. Firstly, we trained a prosody prediction model using 15 different PLMs. Our findings revealed a logarithmic relationship between model size and quality, as well as significant performance differences between neutral and expressive prosody. Secondly, we employed PLMs for pause prediction and found that the task was less sensitive to small models. We also identified a strong correlation between our empirical results and the GLUE scores obtained for these language models. To the best of our knowledge, this is the first study of its kind to investigate the impact of different PLMs on TTS.
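The two quantitative claims above, a logarithmic size-quality relationship and a correlation with GLUE scores, can be illustrated with a short sketch. The Python example below uses entirely hypothetical model sizes, quality scores, and GLUE scores (none of these values come from the paper); it fits quality as a linear function of log(size), which is what a logarithmic relationship amounts to, and computes a rank correlation between quality and GLUE.

```python
# Illustrative sketch only: all numbers below are hypothetical placeholders,
# not results from the paper.
import numpy as np
from scipy import stats

sizes = np.array([14e6, 66e6, 110e6, 340e6, 1.5e9])  # hypothetical PLM parameter counts
quality = np.array([3.1, 3.5, 3.7, 3.9, 4.1])        # hypothetical TTS quality scores
glue = np.array([75.0, 79.5, 81.0, 83.5, 85.0])      # hypothetical GLUE scores

# A logarithmic relationship, quality = a * log(size) + b, is linear in
# log(size), so an ordinary least-squares fit suffices.
a, b = np.polyfit(np.log(sizes), quality, deg=1)
print(f"quality ~ {a:.3f} * log(size) + {b:.3f}")

# Rank correlation between the TTS quality scores and the GLUE scores.
rho, p = stats.spearmanr(quality, glue)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```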
Cite as: Moya, M.G., Karanasou, P., Karlapati, S., Schnell, B., Peinelt, N., Moinet, A., Drugman, T. (2023) A Comparative Analysis of Pretrained Language Models for Text-to-Speech. Proc. 12th ISCA Speech Synthesis Workshop (SSW2023), 14-20, doi: 10.21437/SSW.2023-3
@inproceedings{moya23_ssw,
  author={Marcel Granero Moya and Penny Karanasou and Sri Karlapati and Bastian Schnell and Nicole Peinelt and Alexis Moinet and Thomas Drugman},
  title={{A Comparative Analysis of Pretrained Language Models for Text-to-Speech}},
  year={2023},
  booktitle={Proc. 12th ISCA Speech Synthesis Workshop (SSW2023)},
  pages={14--20},
  doi={10.21437/SSW.2023-3}
}