Non-Parallel Voice Conversion with Cyclic Variational Autoencoder

Patrick Lumban Tobing, Yi-Chiao Wu, Tomoki Hayashi, Kazuhiro Kobayashi, Tomoki Toda


In this paper, we present a novel technique for non-parallel voice conversion (VC) with the use of cyclic variational autoencoder (CycleVAE)-based spectral modeling. In a variational autoencoder (VAE) framework, a latent space, usually with a Gaussian prior, is used to encode a set of input features. In VAE-based VC, the encoded latent features are fed into a decoder, along with speaker-coding features, to generate estimated spectra with either the original speaker identity (reconstructed) or another speaker identity (converted). Due to the non-parallel modeling condition, the converted spectra cannot be directly optimized, which heavily degrades the performance of VAE-based VC. In this work, to overcome this problem, we propose to use a CycleVAE-based spectral model that indirectly optimizes the conversion flow by recycling the converted features back into the system to obtain corresponding cyclic reconstructed spectra that can be directly optimized. The cyclic flow can be continued by using the cyclic reconstructed features as input for the next cycle. The experimental results demonstrate the effectiveness of the proposed CycleVAE-based VC, which yields higher accuracy of converted spectra, generates latent features with a higher degree of correlation, and significantly improves the quality and conversion accuracy of the converted speech.
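The cyclic flow described above can be illustrated with a minimal toy sketch. The linear "encoder" and "decoder" below, the dimensions, and all variable names are illustrative assumptions standing in for the paper's actual networks; KL-divergence terms and the VAE's stochastic sampling are omitted for brevity. The point is the training signal: the converted spectra have no parallel target, but re-encoding them and decoding with the source speaker code yields cyclic reconstructed spectra that can be compared against the original input.

```python
import numpy as np

# Toy stand-ins for the encoder/decoder networks (assumed, not the
# authors' implementation): plain linear maps with random weights.
rng = np.random.default_rng(0)
FEAT_DIM, LATENT_DIM, SPK_DIM = 8, 4, 2

W_enc = rng.standard_normal((FEAT_DIM, LATENT_DIM)) * 0.1
W_dec = rng.standard_normal((LATENT_DIM + SPK_DIM, FEAT_DIM)) * 0.1

def encode(x):
    """Map spectral features to latent features (mean only, for brevity)."""
    return x @ W_enc

def decode(z, spk_code):
    """Estimate spectra from latent features plus a one-hot speaker code."""
    return np.concatenate([z, spk_code]) @ W_dec

src_code = np.array([1.0, 0.0])  # source speaker code
trg_code = np.array([0.0, 1.0])  # target speaker code

x_src = rng.standard_normal(FEAT_DIM)  # input source spectra
z = encode(x_src)
x_rec = decode(z, src_code)   # reconstructed: same speaker, directly optimizable
x_conv = decode(z, trg_code)  # converted: no parallel target exists

# Cyclic flow: recycle the converted features back into the system and
# decode with the source code, giving cyclic reconstructed spectra that
# CAN be directly compared against the original source features.
z_cyc = encode(x_conv)
x_cyc = decode(z_cyc, src_code)

rec_loss = np.mean((x_src - x_rec) ** 2)  # direct reconstruction loss
cyc_loss = np.mean((x_src - x_cyc) ** 2)  # cyclic reconstruction loss
total_loss = rec_loss + cyc_loss          # KL terms omitted in this sketch
```

The cycle can be repeated by feeding `x_cyc` back in as the input for the next cycle, as the abstract notes.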


DOI: 10.21437/Interspeech.2019-2307

Cite as: Tobing, P.L., Wu, Y.-C., Hayashi, T., Kobayashi, K., Toda, T. (2019) Non-Parallel Voice Conversion with Cyclic Variational Autoencoder. Proc. Interspeech 2019, 674-678, DOI: 10.21437/Interspeech.2019-2307.


@inproceedings{Tobing2019,
  author={Patrick Lumban Tobing and Yi-Chiao Wu and Tomoki Hayashi and Kazuhiro Kobayashi and Tomoki Toda},
  title={{Non-Parallel Voice Conversion with Cyclic Variational Autoencoder}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={674--678},
  doi={10.21437/Interspeech.2019-2307},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2307}
}