Learning Latent Representations for Speech Generation and Transformation

Wei-Ning Hsu, Yu Zhang, James Glass


The ability to model a generative process and learn a latent representation for speech in an unsupervised fashion is crucial for processing vast quantities of unlabelled speech data. Recently, deep probabilistic generative models such as Variational Autoencoders (VAEs) have achieved tremendous success in modeling natural images. In this paper, we apply a convolutional VAE to model the generative process of natural speech. We derive latent-space arithmetic operations to disentangle the learned latent representations, and we demonstrate that our model can modify the phonetic content or the speaker identity of speech segments using these operations, without the need for parallel supervisory data.
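The latent-space arithmetic described above can be illustrated with a toy sketch: an attribute vector is estimated as the difference between mean latent codes of two groups of segments (e.g. two speakers), then added to a segment's code to shift that attribute. The encoder/decoder, variable names, and randomly generated "latent codes" below are hypothetical stand-ins, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 16

# Hypothetical encoder outputs: latent codes for 50 segments from each of
# two speakers. A real VAE encoder would produce these from spectrograms.
z_speaker_a = rng.normal(loc=1.0, scale=0.1, size=(50, latent_dim))
z_speaker_b = rng.normal(loc=-1.0, scale=0.1, size=(50, latent_dim))

# Attribute vector: difference of per-speaker latent means.
v_a_to_b = z_speaker_b.mean(axis=0) - z_speaker_a.mean(axis=0)

# Shift one speaker-A segment toward speaker B; a decoder (not shown)
# would then synthesize the transformed speech from z_modified.
z_segment = z_speaker_a[0]
z_modified = z_segment + v_a_to_b

# Sanity check: the shifted code is now closer to speaker B's mean.
dist_to_b = np.linalg.norm(z_modified - z_speaker_b.mean(axis=0))
dist_to_a = np.linalg.norm(z_modified - z_speaker_a.mean(axis=0))
print(dist_to_b < dist_to_a)  # True
```

The same recipe applies to phonetic attributes by grouping segments by phone label instead of speaker.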


DOI: 10.21437/Interspeech.2017-349

Cite as: Hsu, W., Zhang, Y., Glass, J. (2017) Learning Latent Representations for Speech Generation and Transformation. Proc. Interspeech 2017, 1273-1277, DOI: 10.21437/Interspeech.2017-349.


@inproceedings{Hsu2017,
  author={Wei-Ning Hsu and Yu Zhang and James Glass},
  title={Learning Latent Representations for Speech Generation and Transformation},
  year={2017},
  booktitle={Proc. Interspeech 2017},
  pages={1273--1277},
  doi={10.21437/Interspeech.2017-349},
  url={http://dx.doi.org/10.21437/Interspeech.2017-349}
}