Multi-target Voice Conversion without Parallel Data by Adversarially Learning Disentangled Audio Representations

Ju-chieh Chou, Cheng-chieh Yeh, Hung-yi Lee, Lin-shan Lee


Recently, the cycle-consistent adversarial network (Cycle-GAN) has been successfully applied to voice conversion to a different speaker without parallel data, although those approaches require an individual model for each target speaker. In this paper, we propose an adversarial learning framework for voice conversion with which a single model can be trained to convert the voice to many different speakers, all without parallel data, by separating the speaker characteristics from the linguistic content in speech signals. An autoencoder is first trained to extract a speaker-independent latent representation and a speaker embedding separately, using an auxiliary speaker classifier to regularize the latent representation. The decoder then takes the speaker-independent latent representation and the target speaker embedding as input to generate the voice of the target speaker carrying the linguistic content of the source utterance. The quality of the decoder output is further improved by patching it with the residual signal produced by an additional generator and discriminator pair. A target speaker set of size 20 was tested in the preliminary experiments, and very good voice quality was obtained; conventional voice conversion metrics are reported. We also show that speaker information has been properly reduced in the latent representations.
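The data flow the abstract describes (encoder → speaker-independent latent, adversarial speaker classifier on the latent, decoder conditioned on a target speaker embedding) can be sketched as follows. This is a minimal illustrative mock-up with random weights; all layer sizes, function names, and the spectral-frame input format are assumptions for illustration, not the authors' implementation (which also adds a residual GAN patch, omitted here).

```python
# Toy sketch of the disentanglement pipeline: shapes and data flow only.
# All dimensions and names below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

D_SPEC, D_LATENT, D_EMB, N_SPEAKERS = 513, 128, 64, 20  # assumed sizes

# Random matrices standing in for trained networks.
W_enc = rng.normal(size=(D_SPEC, D_LATENT)) * 0.01
W_clf = rng.normal(size=(D_LATENT, N_SPEAKERS)) * 0.01
speaker_emb = rng.normal(size=(N_SPEAKERS, D_EMB)) * 0.01
W_dec = rng.normal(size=(D_LATENT + D_EMB, D_SPEC)) * 0.01

def encode(spec):
    """Map spectral frames to a (nominally speaker-independent) latent."""
    return np.tanh(spec @ W_enc)

def classify(z):
    """Auxiliary speaker classifier on the latent. During training, the
    adversarial loss would push its predictions toward chance, so that z
    retains linguistic content but not speaker identity."""
    logits = z @ W_clf
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def decode(z, target_id):
    """Decoder conditioned on the target speaker's embedding: same
    linguistic content (z), new speaker characteristics."""
    emb = np.broadcast_to(speaker_emb[target_id], (z.shape[0], D_EMB))
    return np.concatenate([z, emb], axis=-1) @ W_dec

frames = rng.normal(size=(100, D_SPEC))  # a 100-frame source utterance
z = encode(frames)                       # speaker-independent latent
probs = classify(z)                      # adversary's speaker posterior
converted = decode(z, target_id=7)       # utterance in speaker 7's voice
```

Note that a single decoder serves all 20 target speakers: changing `target_id` swaps the conditioning embedding, which is what lets one model cover the whole speaker set without parallel data.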


DOI: 10.21437/Interspeech.2018-1830

Cite as: Chou, J., Yeh, C., Lee, H., Lee, L. (2018) Multi-target Voice Conversion without Parallel Data by Adversarially Learning Disentangled Audio Representations. Proc. Interspeech 2018, 501-505, DOI: 10.21437/Interspeech.2018-1830.


@inproceedings{Chou2018,
  author={Ju-chieh Chou and Cheng-chieh Yeh and Hung-yi Lee and Lin-shan Lee},
  title={Multi-target Voice Conversion without Parallel Data by Adversarially Learning Disentangled Audio Representations},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={501--505},
  doi={10.21437/Interspeech.2018-1830},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1830}
}