Unsupervised End-to-End Learning of Discrete Linguistic Units for Voice Conversion

Andy T. Liu, Po-chun Hsu, Hung-Yi Lee


We present an unsupervised, end-to-end training scheme in which we discover discrete subword units from speech without using any labels. The discrete subword units are learned in an ASR-TTS autoencoder reconstruction setting, where an ASR-Encoder is trained to discover a set of common linguistic units given a variety of speakers, and a TTS-Decoder is trained to project the discovered units back to the designated speech. We propose a discrete encoding method, Multilabel-Binary Vectors (MBV), that makes the ASR-TTS autoencoder differentiable. We find that the proposed encoding method automatically separates speech content from speaker style and is sufficient to cover the full linguistic content of a given language. The TTS-Decoder can therefore synthesize speech with the same content as the input to the ASR-Encoder but with different speaker characteristics, achieving voice conversion (VC). We further improve VC quality with adversarial training, in which a TTS-Patcher is trained to augment the output of the TTS-Decoder. Objective and subjective evaluations show that the proposed approach yields strong VC results, eliminating speaker identity while preserving speech content. In the ZeroSpeech 2019 Challenge, we achieved outstanding performance in terms of low bitrate.
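The abstract does not spell out how the MBV encoding is computed; a common way to obtain a differentiable multi-label binary code is to binarize sigmoid activations in the forward pass and use a straight-through estimator in the backward pass. The sketch below is an illustrative NumPy toy (the function names `mbv_encode` and `mbv_backward` are my own, not from the paper), not the authors' implementation:

```python
import numpy as np

def mbv_encode(logits):
    """Illustrative forward pass of a multi-label binary code:
    each dimension is independently thresholded to {0, 1}."""
    probs = 1.0 / (1.0 + np.exp(-logits))      # element-wise sigmoid
    binary = (probs > 0.5).astype(np.float32)  # hard multi-label vector
    return binary, probs

def mbv_backward(grad_output):
    """Straight-through estimator (assumed): treat the hard
    thresholding as identity so gradients can flow from the
    decoder back through the discrete bottleneck to the encoder."""
    return grad_output

# Toy example: four-dimensional encoder output.
logits = np.array([2.0, -1.5, 0.3, -0.2])
code, _ = mbv_encode(logits)
print(code)  # → [1. 0. 1. 0.]
```

Unlike a one-hot code, such a multi-label vector can switch several bits on at once, which is what lets a compact code still cover a language's full linguistic content.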


 DOI: 10.21437/Interspeech.2019-2048

Cite as: Liu, A.T., Hsu, P., Lee, H. (2019) Unsupervised End-to-End Learning of Discrete Linguistic Units for Voice Conversion. Proc. Interspeech 2019, 1108-1112, DOI: 10.21437/Interspeech.2019-2048.


@inproceedings{Liu2019,
  author={Andy T. Liu and Po-chun Hsu and Hung-Yi Lee},
  title={{Unsupervised End-to-End Learning of Discrete Linguistic Units for Voice Conversion}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={1108--1112},
  doi={10.21437/Interspeech.2019-2048},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2048}
}