Conditional End-to-End Audio Transforms

Albert Haque, Michelle Guo, Prateek Verma


We present an end-to-end method for transforming audio from one style to another. For the case of speech, by conditioning on speaker identities, we can train a single model to transform words spoken by multiple people into multiple target voices. For the case of music, we can specify musical instruments and achieve the same result. Architecturally, our method is a fully differentiable sequence-to-sequence model based on convolutional and hierarchical recurrent neural networks. It is designed to capture long-term acoustic dependencies, requires minimal post-processing, and produces realistic audio transforms. Ablation studies confirm that our model can separate acoustic properties from musical and language content at different receptive fields. Empirically, our method achieves competitive performance on community-standard datasets.
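The key architectural idea in the abstract is conditioning a single sequence-to-sequence model on a target identity (a speaker or an instrument), so that one set of weights can produce multiple target styles. The sketch below is not the authors' implementation; it is a minimal NumPy toy that illustrates the conditioning mechanism only: a learned per-identity embedding is broadcast across time and concatenated to the input features before a (here, trivially shallow) encoder-decoder. All layer sizes and names are hypothetical.

```python
import numpy as np

# Hedged sketch (NOT the paper's model): illustrate identity conditioning
# by concatenating a learned per-speaker embedding to every input frame.
rng = np.random.default_rng(0)

N_SPEAKERS, EMB_DIM = 4, 8    # hypothetical number of identities / embedding size
FEAT_DIM, HID_DIM = 16, 32    # hypothetical feature / hidden sizes

speaker_emb = rng.normal(size=(N_SPEAKERS, EMB_DIM))          # embedding lookup table
W_enc = rng.normal(size=(FEAT_DIM + EMB_DIM, HID_DIM)) * 0.1  # stand-in for the encoder
W_dec = rng.normal(size=(HID_DIM, FEAT_DIM)) * 0.1            # stand-in for the decoder

def transform(frames, target_id):
    """Map source frames (T, FEAT_DIM) to output frames (T, FEAT_DIM),
    conditioned on the target identity's embedding."""
    T = frames.shape[0]
    # Broadcast the target embedding across all T timesteps.
    cond = np.repeat(speaker_emb[target_id][None, :], T, axis=0)
    # Concatenate features with the conditioning vector, then encode/decode.
    h = np.tanh(np.concatenate([frames, cond], axis=1) @ W_enc)
    return h @ W_dec

src = rng.normal(size=(50, FEAT_DIM))   # 50 frames of source audio features
out_a = transform(src, target_id=0)     # same input, target voice 0
out_b = transform(src, target_id=1)     # same input, target voice 1
print(out_a.shape)                      # (50, 16)
```

Because only the embedding differs between the two calls, the same input is mapped to two different outputs; in the actual model this role is played by deep convolutional and hierarchical recurrent layers rather than single matrices.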


DOI: 10.21437/Interspeech.2018-38

Cite as: Haque, A., Guo, M., Verma, P. (2018) Conditional End-to-End Audio Transforms. Proc. Interspeech 2018, 2295-2299, DOI: 10.21437/Interspeech.2018-38.


@inproceedings{Haque2018,
  author={Albert Haque and Michelle Guo and Prateek Verma},
  title={Conditional End-to-End Audio Transforms},
  year={2018},
  booktitle={Proc. Interspeech 2018},
  pages={2295--2299},
  doi={10.21437/Interspeech.2018-38},
  url={http://dx.doi.org/10.21437/Interspeech.2018-38}
}