Fine-Grained Robust Prosody Transfer for Single-Speaker Neural Text-To-Speech

Viacheslav Klimkov, Srikanth Ronanki, Jonas Rohnke, Thomas Drugman


We present a neural text-to-speech system for fine-grained prosody transfer from one speaker to another. Conventional approaches to end-to-end prosody transfer typically encode the reference signal with either a fixed-dimensional prosody embedding or a variable-length one obtained via a secondary attention mechanism. However, when trained on a single-speaker dataset, such systems are not robust to speaker variability, especially when the reference signal comes from an unseen speaker. We therefore propose decoupling the reference-signal alignment from the rest of the system: we pre-compute phoneme-level timestamps and use them to aggregate prosodic features per phoneme, injecting the resulting features into a sequence-to-sequence text-to-speech system. We further incorporate a variational auto-encoder to enhance the latent representation of the prosody embeddings. We show that our proposed approach is significantly more stable and achieves reliable prosody transplantation from an unseen speaker. We also propose a solution for the use case in which the transcription of the reference signal is unavailable. We evaluate all proposed methods with both objective metrics and subjective listening tests.
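The phoneme-level aggregation step mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, feature layout, and span format are assumptions; the boundaries would come from a forced aligner and the frame-level features (e.g. log-F0, energy) from standard signal-processing front-ends.

```python
# Hypothetical sketch of per-phoneme prosody aggregation: frame-level
# prosodic features are averaged within pre-computed phoneme boundaries,
# yielding one prosody vector per phoneme. Names and shapes are
# illustrative assumptions, not the paper's actual code.
import numpy as np

def aggregate_prosody(frame_features: np.ndarray,
                      phoneme_spans: list[tuple[int, int]]) -> np.ndarray:
    """Average frame-level features over each phoneme's frame span.

    frame_features: (num_frames, feat_dim) array of prosodic features.
    phoneme_spans:  (start_frame, end_frame) pairs, end exclusive,
                    one per phoneme, e.g. from a forced aligner.
    Returns a (num_phonemes, feat_dim) array of phoneme-level features.
    """
    return np.stack([frame_features[s:e].mean(axis=0)
                     for s, e in phoneme_spans])

# Toy example: 6 frames of 2-dim features, two phonemes of 3 frames each.
feats = np.array([[1., 0.], [1., 2.], [1., 4.],
                  [3., 0.], [3., 2.], [3., 4.]])
spans = [(0, 3), (3, 6)]
agg = aggregate_prosody(feats, spans)
# agg == [[1., 2.], [3., 2.]]
```

Because the alignment is computed offline, the prosody path needs no secondary attention over the reference signal at training or inference time, which is what makes the transfer robust to an unseen reference speaker.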


 DOI: 10.21437/Interspeech.2019-2571

Cite as: Klimkov, V., Ronanki, S., Rohnke, J., Drugman, T. (2019) Fine-Grained Robust Prosody Transfer for Single-Speaker Neural Text-To-Speech. Proc. Interspeech 2019, 4440-4444, DOI: 10.21437/Interspeech.2019-2571.


@inproceedings{Klimkov2019,
  author={Viacheslav Klimkov and Srikanth Ronanki and Jonas Rohnke and Thomas Drugman},
  title={{Fine-Grained Robust Prosody Transfer for Single-Speaker Neural Text-To-Speech}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={4440--4444},
  doi={10.21437/Interspeech.2019-2571},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2571}
}