Parrotron: An End-to-End Speech-to-Speech Conversion Model and its Applications to Hearing-Impaired Speech and Speech Separation

Fadi Biadsy, Ron J. Weiss, Pedro J. Moreno, Dimitri Kanevsky, Ye Jia


We describe Parrotron, an end-to-end-trained speech-to-speech conversion model that maps an input spectrogram directly to another spectrogram, without utilizing any intermediate discrete representation. The network is composed of an encoder, spectrogram and phoneme decoders, followed by a vocoder to synthesize a time-domain waveform. We demonstrate that this model can be trained to normalize speech from any speaker regardless of accent, prosody, and background noise, into the voice of a single canonical target speaker with a fixed accent and consistent articulation and prosody. We further show that this normalization model can be adapted to normalize highly atypical speech from a deaf speaker, resulting in significant improvements in intelligibility and naturalness, measured via a speech recognizer and listening tests. Finally, demonstrating the utility of this model on other speech tasks, we show that the same model architecture can be trained to perform a speech separation task.
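The abstract describes the model as an encoder feeding two decoders (spectrogram and phoneme), with a vocoder producing the waveform. The following is a minimal shape-level sketch of that layout; all dimensions, weight names, and the single dense layers standing in for the real encoder/decoder stacks are illustrative assumptions, not the paper's implementation, and the vocoder stage is omitted.

```python
import numpy as np

# Hypothetical shape-level sketch of a Parrotron-style model.
# All sizes below are assumptions for illustration only.
N_MELS = 80       # input/output spectrogram channels (assumed)
ENC_DIM = 256     # encoder hidden size (assumed)
N_PHONES = 42     # auxiliary phoneme inventory size (assumed)

rng = np.random.default_rng(0)

# Randomly initialized weights; the real model is trained end to end.
W_enc = rng.standard_normal((N_MELS, ENC_DIM)) * 0.01
W_spec = rng.standard_normal((ENC_DIM, N_MELS)) * 0.01
W_phone = rng.standard_normal((ENC_DIM, N_PHONES)) * 0.01

def parrotron_sketch(spectrogram):
    """Map an input spectrogram [T, N_MELS] to an output spectrogram of
    the same shape, plus per-frame phoneme logits (the auxiliary task)."""
    h = np.tanh(spectrogram @ W_enc)   # encoder (stand-in for the full stack)
    out_spec = h @ W_spec              # spectrogram decoder
    phone_logits = h @ W_phone         # phoneme decoder (auxiliary)
    return out_spec, phone_logits

x = rng.standard_normal((100, N_MELS))   # 100 frames of input
y, p = parrotron_sketch(x)
print(y.shape, p.shape)                  # (100, 80) (100, 42)
```

The auxiliary phoneme decoder mirrors the paper's multitask setup: predicting phonemes from the encoder output encourages it to retain linguistic content while the spectrogram decoder re-synthesizes the target voice.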


DOI: 10.21437/Interspeech.2019-1789

Cite as: Biadsy, F., Weiss, R.J., Moreno, P.J., Kanevsky, D., Jia, Y. (2019) Parrotron: An End-to-End Speech-to-Speech Conversion Model and its Applications to Hearing-Impaired Speech and Speech Separation. Proc. Interspeech 2019, 4115-4119, DOI: 10.21437/Interspeech.2019-1789.


@inproceedings{Biadsy2019,
  author={Fadi Biadsy and Ron J. Weiss and Pedro J. Moreno and Dimitri Kanevsky and Ye Jia},
  title={{Parrotron: An End-to-End Speech-to-Speech Conversion Model and its Applications to Hearing-Impaired Speech and Speech Separation}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={4115--4119},
  doi={10.21437/Interspeech.2019-1789},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1789}
}