INTERSPEECH 2008
9th Annual Conference of the International Speech Communication Association

Brisbane, Australia
September 22-26, 2008

Robustness of HMM-Based Speech Synthesis

Junichi Yamagishi (1), Zhen-Hua Ling (2), Simon King (1)

(1) University of Edinburgh, UK; (2) University of Science & Technology of China, China

As speech synthesis techniques become more advanced, we are able to consider building high-quality voices from data collected outside the usual highly-controlled recording studio environment. This presents new challenges that are not present in conventional text-to-speech synthesis: the available speech data are not perfectly clean, the recording conditions are not consistent, and/or the phonetic balance of the material is not ideal. Although the Blizzard Challenge provides a clear picture of the performance of various speech synthesis techniques (e.g., concatenative, HMM-based or hybrid) under good conditions, it is not well understood how robust these algorithms are to less favourable conditions. In this paper, we analyse the performance of several speech synthesis methods under such conditions. This is, as far as we know, a new research topic: "robust speech synthesis". As a consequence of our investigations, we propose a new robust training method for HMM-based speech synthesis, for use with speech data collected in unfavourable conditions.


Bibliographic reference. Yamagishi, Junichi / Ling, Zhen-Hua / King, Simon (2008): "Robustness of HMM-based speech synthesis", in INTERSPEECH-2008, 581-584.