Recent works have shown that modelling raw waveforms directly from text in an end-to-end (E2E) fashion produces more natural-sounding speech than traditional neural text-to-speech (TTS) systems based on a cascade or two-stage approach. However, current state-of-the-art E2E models are computationally complex and memory-consuming, making them unsuitable for real-time offline on-device applications in low-resource scenarios. To address this issue, we propose a Lightweight E2E-TTS (LE2E) model that generates high-quality speech while requiring minimal computational resources. We evaluate the proposed model on the LJSpeech dataset and show that it achieves state-of-the-art performance while being up to 90% smaller in terms of model parameters and 10× faster in real-time factor. Furthermore, we demonstrate that the proposed E2E training paradigm achieves better quality than an equivalent architecture trained in a two-stage approach. Our results suggest that LE2E is a promising approach for developing real-time, high-quality, low-resource TTS for on-device applications.
Cite as: Vecino, B.T., Gabrys, A., Matwicki, D., Pomirski, A., Iddon, T., Cotescu, M., Lorenzo-Trueba, J. (2023) Lightweight End-to-end Text-to-speech Synthesis for low resource on-device applications. Proc. 12th ISCA Speech Synthesis Workshop (SSW2023), 225-229, doi: 10.21437/SSW.2023-35
@inproceedings{vecino23_ssw,
  author={Biel Tura Vecino and Adam Gabrys and Daniel Matwicki and Andrzej Pomirski and Tom Iddon and Marius Cotescu and Jaime Lorenzo-Trueba},
  title={{Lightweight End-to-end Text-to-speech Synthesis for low resource on-device applications}},
  year={2023},
  booktitle={Proc. 12th ISCA Speech Synthesis Workshop (SSW2023)},
  pages={225--229},
  doi={10.21437/SSW.2023-35}
}