Analysis by Adversarial Synthesis — A Novel Approach for Speech Vocoding

Ahmed Mustafa, Arijit Biswas, Christian Bergler, Julia Schottenhamml, Andreas Maier


Classical parametric speech coding techniques provide a compact representation for speech signals. This affords a very low transmission rate, but at the cost of reduced perceptual quality in the reconstructed signals. Recently, autoregressive deep generative models such as WaveNet and SampleRNN have been used as speech vocoders to improve the perceptual quality of the reconstructed signals without increasing the coding rate. However, such models suffer from very slow signal generation due to their sample-by-sample modelling approach. In this work, we introduce a new methodology for neural speech vocoding based on generative adversarial networks (GANs). A fake speech signal is generated from a very compressed representation of the glottal excitation using conditional GANs as a deep generative model. This fake speech is then refined using the LPC parameters of the original speech signal to obtain a natural reconstruction. The reconstructed speech waveforms based on this approach show higher perceptual quality than their classical vocoder counterparts according to subjective and objective evaluation scores on a dataset of 30 male and female speakers. Moreover, the use of GANs enables one-shot signal generation, in contrast to autoregressive generative models. This makes GANs a promising direction for implementing high-quality neural vocoders.
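The LPC-based refinement mentioned in the abstract builds on classical linear-predictive analysis/synthesis: an analysis (inverse) filter derived from the LPC coefficients whitens the speech into an excitation-like residual, and the matching all-pole synthesis filter re-imposes the spectral envelope. The sketch below illustrates only this standard textbook round trip (autocorrelation method with Levinson-Durbin recursion); it is not the authors' implementation, and the function names are illustrative.

```python
import numpy as np
from scipy.signal import lfilter

def lpc(x, order):
    """Estimate LPC coefficients via the autocorrelation method
    and the Levinson-Durbin recursion. Returns a = [1, a1, ..., ap]."""
    # Biased autocorrelation for lags 0..order
    r = np.array([np.dot(x[:len(x) - i], x[i:]) for i in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        # Reflection coefficient from the current prediction error
        acc = r[i] + np.dot(a[1:i], r[1:i][::-1])
        k = -acc / err
        a_prev = a.copy()
        for j in range(1, i):
            a[j] = a_prev[j] + k * a_prev[i - j]
        a[i] = k
        err *= (1.0 - k * k)
    return a

# Toy signal standing in for a short voiced speech frame
rng = np.random.default_rng(0)
n = np.arange(2048)
x = np.sin(2 * np.pi * 0.01 * n) + 0.01 * rng.standard_normal(len(n))

a = lpc(x, order=10)
residual = lfilter(a, [1.0], x)        # analysis: whiten into excitation
y = lfilter([1.0], a, residual)        # synthesis: restore spectral envelope
```

Because the synthesis filter is the exact inverse of the analysis filter, `y` matches `x` up to floating-point error; in the paper's setting, the GAN output replaces the true residual as the excitation driving the synthesis stage.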


 DOI: 10.21437/Interspeech.2019-1195

Cite as: Mustafa, A., Biswas, A., Bergler, C., Schottenhamml, J., Maier, A. (2019) Analysis by Adversarial Synthesis — A Novel Approach for Speech Vocoding. Proc. Interspeech 2019, 191-195, DOI: 10.21437/Interspeech.2019-1195.


@inproceedings{Mustafa2019,
  author={Ahmed Mustafa and Arijit Biswas and Christian Bergler and Julia Schottenhamml and Andreas Maier},
  title={{Analysis by Adversarial Synthesis — A Novel Approach for Speech Vocoding}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={191--195},
  doi={10.21437/Interspeech.2019-1195},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1195}
}