16th Annual Conference of the International Speech Communication Association

Dresden, Germany
September 6-10, 2015

Semi-Supervised Training of a Voice Conversion Mapping Function Using a Joint-Autoencoder

Seyed Hamidreza Mohammadi, Alexander Kain

Oregon Health & Science University, USA

Recently, researchers have begun to investigate Deep Neural Network (DNN) architectures as mapping functions in voice conversion systems. In this study, we propose a novel Stacked-Joint-Autoencoder (SJAE) architecture, which aims to find a common encoding of parallel source and target features. The SJAE is initialized from a Stacked-Autoencoder (SAE) that has been trained on a large general-purpose speech database. We also propose to train the SJAE using unrelated speakers that are similar to the source and target speakers, instead of using only the source and target speakers themselves. The final DNN is constructed from the source-encoding part and the target-decoding part of the SJAE, and then fine-tuned using back-propagation. This semi-supervised training approach allows us to use multiple frames during mapping, since we have previously learned both the general structure of the acoustic space and the general structure of similar source-target speaker mappings. We train conversion functions for two speaker pairs and compare several system configurations objectively and subjectively while varying the number of available training sentences. The results show that each of the individual contributions of the SAE, the SJAE, and the use of unrelated speakers to initialize the mapping function increases conversion performance.
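The core joint-autoencoder idea can be illustrated with a toy sketch: two encoders map parallel source and target frames to a shared code space, two decoders reconstruct them, and an extra loss term ties the two codes together; the conversion function is then the source encoder followed by the target decoder. The sketch below uses single linear layers and synthetic data for clarity — the paper's actual system is a stacked, pre-trained, fine-tuned deep network operating on speech features, and all dimensions, weights, and data here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

D, H = 6, 6        # hypothetical feature and code dimensions
lam = 1.0          # weight of the code-similarity ("joint") term
lr = 0.05

# Linear encoders/decoders (no bias, no nonlinearity) for simplicity
We_s = rng.normal(scale=0.1, size=(H, D))  # source encoder
We_t = rng.normal(scale=0.1, size=(H, D))  # target encoder
Wd_s = rng.normal(scale=0.1, size=(D, H))  # source decoder
Wd_t = rng.normal(scale=0.1, size=(D, H))  # target decoder

# Toy parallel frames: target is a fixed linear warp of the source
N = 200
X = rng.normal(size=(N, D))
A = np.eye(D) + rng.normal(scale=0.3, size=(D, D))
Y = X @ A.T

# Loss: ||dec_s(enc_s(X)) - X||^2 + ||dec_t(enc_t(Y)) - Y||^2
#       + lam * ||enc_s(X) - enc_t(Y)||^2   (all means over frames)
for step in range(2000):
    Hs = X @ We_s.T            # source codes
    Ht = Y @ We_t.T            # target codes
    Xr = Hs @ Wd_s.T           # source reconstruction
    Yr = Ht @ Wd_t.T           # target reconstruction

    dXr = 2.0 * (Xr - X) / N   # gradients of the reconstruction terms
    dYr = 2.0 * (Yr - Y) / N
    dTie = 2.0 * lam * (Hs - Ht) / N
    dHs = dXr @ Wd_s + dTie    # back-propagate into the codes
    dHt = dYr @ Wd_t - dTie

    Wd_s -= lr * (dXr.T @ Hs)  # plain gradient-descent updates
    Wd_t -= lr * (dYr.T @ Ht)
    We_s -= lr * (dHs.T @ X)
    We_t -= lr * (dHt.T @ Y)

# Conversion: source-encoding part followed by target-decoding part
Y_hat = (X @ We_s.T) @ Wd_t.T
err = np.mean((Y_hat - Y) ** 2) / np.mean(Y ** 2)
print(f"relative mapping error: {err:.3f}")
```

Because the tying term forces the two code spaces to coincide, the composed map `enc_s -> dec_t` converts source frames toward target frames even though no network was trained directly on the source-to-target mapping — which is what lets the paper pre-train the parts on other data.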


Bibliographic reference: Mohammadi, Seyed Hamidreza / Kain, Alexander (2015): "Semi-supervised training of a voice conversion mapping function using a joint-autoencoder", in INTERSPEECH-2015, 284-288.