Transforming Spectrum and Prosody for Emotional Voice Conversion with Non-Parallel Training Data

Kun Zhou, Berrak Sisman, Haizhou Li


Emotional voice conversion aims to convert the spectrum and prosody of speech to change its emotional pattern, while preserving the speaker identity and linguistic content. Many existing studies require parallel speech data between different emotional patterns, which is not practical in real-world applications. Moreover, they often model the conversion of fundamental frequency (F0) with a simple linear transform. As F0 is a key aspect of intonation that is hierarchical in nature, we believe it is more appropriate to model F0 at different temporal scales using the wavelet transform. We propose a CycleGAN network that finds an optimal pseudo pair from non-parallel training data by learning forward and inverse mappings simultaneously with adversarial and cycle-consistency losses. We also study the use of the continuous wavelet transform (CWT) to decompose F0 into ten temporal scales, which describe speech prosody at different time resolutions, for effective F0 conversion. Experimental results show that our proposed framework outperforms the baselines in both objective and subjective evaluations.
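The ten-scale CWT decomposition of an F0 contour mentioned in the abstract can be sketched as follows. This is a minimal, self-contained illustration using a Mexican-hat (Ricker) mother wavelet and dyadic scales; the scale spacing, wavelet support, and normalization below are illustrative assumptions, not the paper's exact configuration (which, like related prosody work, would also interpolate unvoiced regions and normalize log-F0 per utterance before decomposition).

```python
import numpy as np

def mexican_hat(t, s):
    """Mexican-hat (Ricker) mother wavelet evaluated at times t, scale s."""
    x = t / s
    return (2.0 / (np.sqrt(3.0) * np.pi ** 0.25)) * (1.0 - x ** 2) * np.exp(-x ** 2 / 2.0)

def cwt_decompose_f0(f0, num_scales=10, tau0=1.0):
    """Decompose a (continuous, normalized) F0 contour into `num_scales`
    dyadic temporal scales via a discretized continuous wavelet transform.

    Each component is the convolution of the contour with a scaled,
    energy-normalized Mexican-hat wavelet at scale tau0 * 2**(j+1);
    low j captures fine, fast F0 movements, high j captures slow,
    phrase-level trends. Returns an array of shape (num_scales, len(f0)).
    """
    n = len(f0)
    comps = np.empty((num_scales, n))
    for j in range(num_scales):
        s = tau0 * 2.0 ** (j + 1)
        # Wavelet support: +/- 5 scales, clipped so the kernel never
        # exceeds the signal length (keeps np.convolve 'same' output at n).
        half = min(int(5 * s), (n - 1) // 2)
        t = np.arange(-half, half + 1)
        psi = mexican_hat(t, s) / np.sqrt(s)  # energy normalization
        comps[j] = np.convolve(f0, psi, mode="same")
    return comps
```

In a conversion pipeline, each of the ten component sequences would be converted separately (or stacked as extra feature dimensions), then recombined into a converted F0 contour, which lets fine- and coarse-grained prosodic patterns be transformed independently.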


DOI: 10.21437/Odyssey.2020-33

Cite as: Zhou, K., Sisman, B., Li, H. (2020) Transforming Spectrum and Prosody for Emotional Voice Conversion with Non-Parallel Training Data. Proc. Odyssey 2020 The Speaker and Language Recognition Workshop, 230-237, DOI: 10.21437/Odyssey.2020-33.


@inproceedings{Zhou2020,
  author={Kun Zhou and Berrak Sisman and Haizhou Li},
  title={{Transforming Spectrum and Prosody for Emotional Voice Conversion with Non-Parallel Training Data}},
  year=2020,
  booktitle={Proc. Odyssey 2020 The Speaker and Language Recognition Workshop},
  pages={230--237},
  doi={10.21437/Odyssey.2020-33},
  url={http://dx.doi.org/10.21437/Odyssey.2020-33}
}