Emotional Voice Conversion with Adaptive Scales F0 Based on Wavelet Transform Using Limited Amount of Emotional Data

Zhaojie Luo, Jinhui Chen, Tetsuya Takiguchi, Yasuo Ariki


Deep learning techniques have been successfully applied to speech processing. Typically, neural networks (NNs) are very effective in processing nonlinear features, such as mel cepstral coefficients (MCC), which represent the spectral features in voice conversion (VC) tasks. Despite these successes, the approach is restricted to problems with moderate dimension and sufficient data. Thus, in emotional VC tasks, it is hard to deal with a simple representation of the fundamental frequency (F0), which is the most important feature in emotional voice representation. Another problem is that there are insufficient emotional data for training. To deal with these two problems, in this paper, we propose the adaptive scales continuous wavelet transform (AS-CWT) method to systematically capture the F0 features at different temporal scales, which can represent different prosodic levels ranging from micro-prosody to the sentence level. Meanwhile, we also use pre-trained conversion functions obtained from other emotional datasets to synthesize new emotional data as additional training samples for target emotional voice conversion. Experimental results indicate that our proposed method achieves the best performance in both objective and subjective evaluations.
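To make the multi-scale F0 decomposition concrete: the sketch below is an illustration, not the paper's AS-CWT implementation. It decomposes a synthetic log-F0 contour with Mexican hat (Ricker) wavelets at dyadic scales using plain NumPy, where short scales roughly correspond to micro-prosody and long scales to phrase- or sentence-level trends; the contour, scale choices, and helper names are assumptions for demonstration only.

```python
import numpy as np

def mexican_hat(t, s):
    # Mexican hat (Ricker) wavelet sampled at times t, dilated by scale s
    x = t / s
    return (2.0 / (np.sqrt(3.0 * s) * np.pi ** 0.25)) * (1.0 - x ** 2) * np.exp(-x ** 2 / 2.0)

def cwt_decompose(signal, scales):
    """Decompose a (log-)F0 contour into one component per scale by
    convolving the zero-mean contour with a wavelet at each scale."""
    n = len(signal)
    t = np.arange(n) - n // 2
    centered = signal - signal.mean()
    return np.stack([np.convolve(centered, mexican_hat(t, s), mode="same")
                     for s in scales])

# Synthetic voiced F0 contour (Hz) over 200 frames, analyzed on a log scale
frames = 200
f0 = 120.0 + 20.0 * np.sin(np.linspace(0, 4 * np.pi, frames))
log_f0 = np.log(f0)

# Dyadic scales: 2, 4, ..., 64 frames (micro-prosody up to sentence-level)
scales = [2 ** k for k in range(1, 7)]
components = cwt_decompose(log_f0, scales)
print(components.shape)  # one row of coefficients per scale
```

Summing suitably weighted components (plus the removed mean) approximately reconstructs the contour, which is what lets a conversion model modify each prosodic level separately before resynthesis.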


 DOI: 10.21437/Interspeech.2017-984

Cite as: Luo, Z., Chen, J., Takiguchi, T., Ariki, Y. (2017) Emotional Voice Conversion with Adaptive Scales F0 Based on Wavelet Transform Using Limited Amount of Emotional Data. Proc. Interspeech 2017, 3399-3403, DOI: 10.21437/Interspeech.2017-984.


@inproceedings{Luo2017,
  author={Zhaojie Luo and Jinhui Chen and Tetsuya Takiguchi and Yasuo Ariki},
  title={Emotional Voice Conversion with Adaptive Scales F0 Based on Wavelet Transform Using Limited Amount of Emotional Data},
  year=2017,
  booktitle={Proc. Interspeech 2017},
  pages={3399--3403},
  doi={10.21437/Interspeech.2017-984},
  url={http://dx.doi.org/10.21437/Interspeech.2017-984}
}