Probability Density Distillation with Generative Adversarial Networks for High-Quality Parallel Waveform Generation

Ryuichi Yamamoto, Eunwoo Song, Jae-Min Kim


This paper proposes an effective probability density distillation (PDD) algorithm for WaveNet-based parallel waveform generation (PWG) systems. Recently proposed teacher-student frameworks in the PWG system have successfully achieved real-time generation of speech signals. However, the difficulty of optimizing the PDD criterion without auxiliary losses results in quality degradation of the synthesized speech. To generate more natural speech signals within the teacher-student framework, we propose a novel optimization criterion based on generative adversarial networks (GANs). In the proposed method, the inverse autoregressive flow-based student model is incorporated as a generator in the GAN framework and jointly optimized with both the PDD mechanism and the proposed adversarial learning method. As this process encourages the student to model the distribution of realistic speech waveforms, the perceptual quality of the synthesized speech becomes much more natural. Our experimental results verify that PWG systems with the proposed method outperform those using conventional approaches, as well as autoregressive generation systems with a well-trained teacher WaveNet.
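The joint optimization the abstract describes can be sketched as a weighted sum of a distillation term and an adversarial term for the student generator. The sketch below is a minimal conceptual illustration, not the authors' implementation: the LSGAN-style generator loss, the scalar `kld_term` standing in for the PDD (KL divergence) loss, and the weight `lambda_adv` are all assumptions for illustration.

```python
import numpy as np

def lsgan_generator_loss(d_fake):
    # Least-squares GAN generator loss: push the discriminator's
    # scores on student-generated waveforms, D(x_hat), toward 1.
    return np.mean((np.asarray(d_fake) - 1.0) ** 2)

def combined_student_loss(kld_term, d_fake, lambda_adv=4.0):
    # Hypothetical joint objective: the PDD (distillation) term plus
    # a weighted adversarial term, as the abstract outlines.
    # `lambda_adv` is an assumed balancing weight.
    return kld_term + lambda_adv * lsgan_generator_loss(d_fake)

# Dummy discriminator outputs on a batch of generated samples.
rng = np.random.default_rng(0)
d_fake = rng.uniform(0.0, 1.0, size=16000)
loss = combined_student_loss(kld_term=0.5, d_fake=d_fake)
print(loss > 0.5)  # adversarial term adds to the distillation loss
```

In this view, the PDD term keeps the student close to the teacher's density, while the adversarial term pulls its outputs toward the distribution of real waveforms.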


DOI: 10.21437/Interspeech.2019-1965

Cite as: Yamamoto, R., Song, E., Kim, J. (2019) Probability Density Distillation with Generative Adversarial Networks for High-Quality Parallel Waveform Generation. Proc. Interspeech 2019, 699-703, DOI: 10.21437/Interspeech.2019-1965.


@inproceedings{Yamamoto2019,
  author={Ryuichi Yamamoto and Eunwoo Song and Jae-Min Kim},
  title={{Probability Density Distillation with Generative Adversarial Networks for High-Quality Parallel Waveform Generation}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={699--703},
  doi={10.21437/Interspeech.2019-1965},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1965}
}