Sampling-Based Speech Parameter Generation Using Moment-Matching Networks

Shinnosuke Takamichi, Tomoki Koriyama, Hiroshi Saruwatari

This paper presents sampling-based speech parameter generation using moment-matching networks for Deep Neural Network (DNN)-based speech synthesis. Although humans never produce exactly the same speech twice, even when expressing the same linguistic and para-linguistic information, typical statistical speech synthesis always produces identical speech; that is, synthetic speech has no inter-utterance variation. To give synthetic speech natural inter-utterance variation, this paper builds DNN acoustic models from which speech parameters can be randomly sampled. The DNNs are trained so that the moments of the generated speech parameters are close to those of natural speech parameters. Since the variation of the speech parameters is compressed into a low-dimensional, simple prior noise vector, our algorithm has a lower computational cost than direct sampling of speech parameters. As a first step towards generating synthetic speech with natural inter-utterance variation, this paper investigates whether or not the proposed sampling-based generation degrades synthetic speech quality. In the evaluation, we compare the speech quality of conventional maximum-likelihood-based generation with that of the proposed sampling-based generation. The results demonstrate that the proposed generation causes no degradation in speech quality.
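The paper's exact network architecture and training objective are not reproduced in this abstract. As a rough illustrative sketch of the two ideas it describes, the snippet below (in NumPy; all function names and shapes are assumptions, not the authors' implementation) shows (a) a loss that penalizes the distance between the first two moments of generated and natural speech-parameter batches, and (b) sampling-based generation, where inter-utterance variation comes from a low-dimensional prior noise vector mapped through a trained network:

```python
import numpy as np


def moment_matching_loss(generated, natural):
    """Squared distance between the first two moments (mean and covariance)
    of generated and natural speech-parameter batches.

    Both inputs are (num_frames, dim) arrays. This is an illustrative
    stand-in for the paper's moment-matching objective, not its exact loss.
    """
    mean_loss = np.sum((generated.mean(axis=0) - natural.mean(axis=0)) ** 2)
    cov_loss = np.sum(
        (np.cov(generated, rowvar=False) - np.cov(natural, rowvar=False)) ** 2
    )
    return mean_loss + cov_loss


def sample_speech_params(generator, num_frames, noise_dim, rng):
    """Sampling-based generation: draw simple low-dimensional prior noise
    and map it through the trained network, so each call yields a different
    parameter trajectory (natural inter-utterance variation)."""
    z = rng.standard_normal((num_frames, noise_dim))  # prior noise vector
    return generator(z)
```

For example, with a toy linear "generator" `lambda z: z @ W`, two calls to `sample_speech_params` return different trajectories because the prior noise differs, while training with `moment_matching_loss` would push the statistics of those samples toward the natural data.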

 DOI: 10.21437/Interspeech.2017-362

Cite as: Takamichi, S., Koriyama, T., Saruwatari, H. (2017) Sampling-Based Speech Parameter Generation Using Moment-Matching Networks. Proc. Interspeech 2017, 3961-3965, DOI: 10.21437/Interspeech.2017-362.

@inproceedings{takamichi2017sampling,
  author={Shinnosuke Takamichi and Tomoki Koriyama and Hiroshi Saruwatari},
  title={Sampling-Based Speech Parameter Generation Using Moment-Matching Networks},
  booktitle={Proc. Interspeech 2017},
  year={2017},
  pages={3961--3965},
  doi={10.21437/Interspeech.2017-362}
}