Nonparallel Emotional Speech Conversion

Jian Gao, Deep Chakraborty, Hamidou Tembine, Olaitan Olaleye


We propose a nonparallel data-driven emotional speech conversion method. It transfers the emotion-related characteristics of a speech signal while preserving the speaker's identity and linguistic content. Most existing approaches require parallel data and time alignment, which are unavailable in many real applications. We achieve nonparallel training with an unsupervised style transfer technique, which learns a translation model between two distributions instead of a deterministic one-to-one mapping between paired examples. The conversion model consists of an encoder and a decoder for each emotion domain. We assume that the speech signal can be decomposed into an emotion-invariant content code and an emotion-related style code in latent space. Emotion conversion is performed by extracting and recombining the content code of the source speech and the style code of the target emotion. We tested our method on a nonparallel corpus with four emotions. The evaluation results show the effectiveness of our approach.
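The conversion mechanism described above can be sketched in a few lines. The following toy example is an illustration only: the linear maps, dimensions, and class names are assumptions standing in for the paper's neural encoder/decoder networks, and the point is the content/style factorization and recombination, not the model itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical latent dimensions (not taken from the paper).
FEAT_DIM, CONTENT_DIM, STYLE_DIM = 80, 16, 4

class EmotionAutoencoder:
    """Toy encoder/decoder pair for one emotion domain.

    Random linear maps stand in for the learned networks.
    """
    def __init__(self):
        self.Wc = rng.standard_normal((CONTENT_DIM, FEAT_DIM)) * 0.1
        self.Ws = rng.standard_normal((STYLE_DIM, FEAT_DIM)) * 0.1
        self.Wd = rng.standard_normal((FEAT_DIM, CONTENT_DIM + STYLE_DIM)) * 0.1

    def encode(self, x):
        # Decompose a speech feature frame into an emotion-invariant
        # content code and an emotion-related style code.
        return self.Wc @ x, self.Ws @ x

    def decode(self, content, style):
        # Reassemble a feature frame from a (content, style) pair.
        return self.Wd @ np.concatenate([content, style])

def convert(src_model, tgt_model, src_frame, tgt_frame):
    """Emotion conversion: content of the source + style of the target."""
    content, _ = src_model.encode(src_frame)
    _, style = tgt_model.encode(tgt_frame)
    return tgt_model.decode(content, style)

# Two domains, e.g. neutral -> angry (hypothetical labels).
neutral, angry = EmotionAutoencoder(), EmotionAutoencoder()
converted = convert(neutral, angry,
                    rng.standard_normal(FEAT_DIM),
                    rng.standard_normal(FEAT_DIM))
print(converted.shape)  # (80,)
```

In the actual system the encoders and decoders are trained networks and the features are spectral representations of speech, but the recombination step, source content with target style, is exactly the operation shown in `convert`.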


DOI: 10.21437/Interspeech.2019-2878

Cite as: Gao, J., Chakraborty, D., Tembine, H., Olaleye, O. (2019) Nonparallel Emotional Speech Conversion. Proc. Interspeech 2019, 2858-2862, DOI: 10.21437/Interspeech.2019-2878.


@inproceedings{Gao2019,
  author={Jian Gao and Deep Chakraborty and Hamidou Tembine and Olaitan Olaleye},
  title={{Nonparallel Emotional Speech Conversion}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={2858--2862},
  doi={10.21437/Interspeech.2019-2878},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2878}
}