Voice Conversion with Conditional SampleRNN

Cong Zhou, Michael Horgan, Vivek Kumar, Cristina Vasco, Dan Darcy


Here we present a novel approach to conditioning the SampleRNN [1] generative model for voice conversion (VC). Conventional methods for VC modify the perceived speaker identity by converting between source and target acoustic features. Our approach instead focuses on preserving voice content and relies on the generative network to learn voice style. We first train a multi-speaker SampleRNN model conditioned on linguistic features, pitch contour, and speaker identity using a multi-speaker speech corpus. Voice-converted speech is then generated from the linguistic features and pitch contour extracted from the source speaker, combined with the target speaker's identity. We demonstrate that our system is capable of many-to-many voice conversion without requiring parallel data, enabling broad applications. Subjective evaluation demonstrates that our approach outperforms conventional VC methods.
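The conversion recipe in the abstract — extract content features (linguistic features and pitch contour) from the source utterance, then pair them with the *target* speaker's identity as conditioning for the generative model — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name, the one-hot speaker encoding, and the frame-to-sample upsampling factor are all assumptions for clarity.

```python
import numpy as np

def build_conditioning(linguistic, log_f0, speaker_id, n_speakers, upsample=80):
    """Assemble frame-level conditioning for a conditional SampleRNN-style
    generator (illustrative sketch; all names here are assumptions).

    linguistic : (T, D) frame-level linguistic features from the SOURCE speech.
    log_f0     : (T,)  pitch contour (log F0) from the SOURCE speech.
    speaker_id : index of the TARGET speaker; swapping this converts identity.
    upsample   : waveform samples per conditioning frame.
    """
    T, D = linguistic.shape
    speaker = np.zeros((T, n_speakers))
    speaker[:, speaker_id] = 1.0  # target-speaker one-hot, tiled over frames
    frames = np.concatenate([linguistic, log_f0[:, None], speaker], axis=1)
    # Repeat each frame so the conditioning aligns with the sample rate
    # at which the autoregressive model generates the waveform.
    return np.repeat(frames, upsample, axis=0)  # (T*upsample, D+1+n_speakers)

# Voice conversion: source content features, target speaker identity.
cond = build_conditioning(np.random.randn(10, 40), np.random.randn(10),
                          speaker_id=3, n_speakers=8)
```

Because the speaker identity enters only through the conditioning vector, the same trained model converts between any pair of seen speakers — the many-to-many property claimed above — and the source and target utterances never need to be parallel.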


DOI: 10.21437/Interspeech.2018-1121

Cite as: Zhou, C., Horgan, M., Kumar, V., Vasco, C., Darcy, D. (2018) Voice Conversion with Conditional SampleRNN. Proc. Interspeech 2018, 1973-1977, DOI: 10.21437/Interspeech.2018-1121.


@inproceedings{Zhou2018,
  author={Cong Zhou and Michael Horgan and Vivek Kumar and Cristina Vasco and Dan Darcy},
  title={Voice Conversion with Conditional SampleRNN},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={1973--1977},
  doi={10.21437/Interspeech.2018-1121},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1121}
}