Evaluating Long-form Text-to-Speech: Comparing the Ratings of Sentences and Paragraphs

Rob Clark, Hanna Silen, Tom Kenter, Ralph Leith


Text-to-speech systems are typically evaluated on single sentences. When long-form content, such as data consisting of full paragraphs or dialogues, is considered, evaluating sentences in isolation is not always appropriate, as the context in which the sentences are synthesized is missing. In this paper, we investigate three different ways of evaluating the naturalness of long-form text-to-speech synthesis. We compare the results obtained from evaluating sentences in isolation, evaluating whole paragraphs of speech, and presenting a selection of speech or text as context and evaluating the subsequent speech. We find that, even though these three evaluations are based upon the same material, the outcomes differ per setting, and moreover that these outcomes do not necessarily correlate with each other. We show that our findings are consistent between a single-speaker setting of read paragraphs and a two-speaker dialogue scenario. We conclude that to evaluate the quality of long-form speech, the traditional way of evaluating sentences in isolation does not suffice, and that multiple evaluations are required.


DOI: 10.21437/SSW.2019-18

Cite as: Clark, R., Silen, H., Kenter, T., Leith, R. (2019) Evaluating Long-form Text-to-Speech: Comparing the Ratings of Sentences and Paragraphs. Proc. 10th ISCA Speech Synthesis Workshop, 99-104, DOI: 10.21437/SSW.2019-18.


@inproceedings{Clark2019,
  author={Rob Clark and Hanna Silen and Tom Kenter and Ralph Leith},
  title={{Evaluating Long-form Text-to-Speech: Comparing the Ratings of Sentences and Paragraphs}},
  year={2019},
  booktitle={Proc. 10th ISCA Speech Synthesis Workshop},
  pages={99--104},
  doi={10.21437/SSW.2019-18},
  url={http://dx.doi.org/10.21437/SSW.2019-18}
}