Sequence-to-Sequence Voice Conversion with Similarity Metric Learned Using Generative Adversarial Networks

Takuhiro Kaneko, Hirokazu Kameoka, Kaoru Hiramatsu, Kunio Kashino


We propose a training framework for sequence-to-sequence voice conversion (SVC). A well-known problem with conventional VC frameworks is that acoustic-feature sequences generated by a converter tend to be over-smoothed, resulting in buzzy-sounding speech. This occurs because a particular form of similarity metric or distribution is assumed when training the acoustic model, so the generated feature sequence that best fits the training targets on average is deemed optimal. Such over-smoothing arises whenever a manually designed similarity metric is used. To overcome this limitation, our proposed SVC framework uses a similarity metric implicitly derived from a generative adversarial network, enabling distances to be measured in a high-level abstract space. This allows the model to mitigate the over-smoothing that arises in the low-level data space. Furthermore, we use convolutional neural networks to model long-range context dependencies. This also gives the similarity metric a shift-invariant property, making the model robust against misalignment errors in the parallel data. We tested our framework on a non-native-to-native VC task. The experimental results revealed that the proposed framework had a certain effect in improving naturalness, clarity, and speaker individuality.
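The shift-invariance point can be illustrated with a toy NumPy sketch (this is not the paper's model, and the kernel and pooling choices here are illustrative assumptions): a frame-wise metric such as MSE penalizes a misaligned copy of a sequence heavily, whereas a convolutional feature followed by pooling over time yields a distance that is insensitive to the shift.

```python
import numpy as np

rng = np.random.default_rng(0)

def circular_conv(x, w):
    """Circular 1-D convolution of sequence x with kernel w."""
    n = len(x)
    return np.array([np.dot([x[(t + k) % n] for k in range(len(w))], w)
                     for t in range(n)])

def pooled_feature(x, w):
    """Max-pool the convolution responses over time.

    The max over all time positions is unchanged by a circular
    shift of the input, so this feature is shift-invariant.
    """
    return circular_conv(x, w).max()

x = rng.standard_normal(64)   # a mock acoustic-feature sequence
x_shifted = np.roll(x, 5)     # the same sequence, misaligned by 5 frames
w = rng.standard_normal(8)    # one random convolutional kernel

# Frame-wise MSE treats the misaligned copy as very different.
raw_dist = np.mean((x - x_shifted) ** 2)

# Distance in the pooled convolutional feature space is zero.
feat_dist = abs(pooled_feature(x, w) - pooled_feature(x_shifted, w))

print(raw_dist, feat_dist)
```

In the paper's setting, the discriminator of the GAN plays the role of this feature extractor, but its convolutional filters are learned adversarially rather than drawn at random.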


DOI: 10.21437/Interspeech.2017-970

Cite as: Kaneko, T., Kameoka, H., Hiramatsu, K., Kashino, K. (2017) Sequence-to-Sequence Voice Conversion with Similarity Metric Learned Using Generative Adversarial Networks. Proc. Interspeech 2017, 1283-1287, DOI: 10.21437/Interspeech.2017-970.


@inproceedings{Kaneko2017,
  author={Takuhiro Kaneko and Hirokazu Kameoka and Kaoru Hiramatsu and Kunio Kashino},
  title={Sequence-to-Sequence Voice Conversion with Similarity Metric Learned Using Generative Adversarial Networks},
  year=2017,
  booktitle={Proc. Interspeech 2017},
  pages={1283--1287},
  doi={10.21437/Interspeech.2017-970},
  url={http://dx.doi.org/10.21437/Interspeech.2017-970}
}