A KL Divergence and DNN-Based Approach to Voice Conversion without Parallel Training Sentences

Feng-Long Xie, Frank K. Soong, Haifeng Li


We extend our recently proposed approach to cross-lingual TTS training to voice conversion without using parallel training sentences. It employs a Speaker-Independent Deep Neural Network (SI-DNN) ASR to equalize the difference between source and target speakers, and Kullback-Leibler Divergence (KLD) to convert spectral parameters probabilistically in the phonetic space via the ASR senone posterior probabilities of the two speakers. Depending on whether the transcriptions of the target speaker's training speech are known, the approach can be either supervised or unsupervised. In the supervised mode, adequate transcribed training data of the target speaker is used to train a GMM-HMM TTS model of the target speaker, and each frame of the source speaker's input is mapped to the closest senone in the trained TTS model. The mapping is done via the posterior probabilities computed by the SI-DNN ASR and minimum-KLD matching. In the unsupervised mode, all training data of the target speaker is first grouped into phonetic clusters, with KLD as the sole distortion measure. Once the phonetic clusters are trained, each frame of the source speaker's input is mapped to the mean of the closest phonetic cluster. The final converted speech is generated with the maximum-probability trajectory generation algorithm. Both objective and subjective evaluations show that the proposed approach achieves higher speaker similarity and lower spectral distortion than a baseline system based on our sequential-error-minimization trained DNN algorithm.
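The core matching step described in the abstract — mapping each source frame to the target senone (or phonetic cluster) whose posterior distribution is closest under KLD — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names and the use of plain NumPy arrays for posterior vectors are assumptions for the example.

```python
import numpy as np

def kld(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two discrete
    distributions, e.g. senone posterior vectors from the SI-DNN ASR.
    Probabilities are floored at eps to avoid log(0)."""
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)))

def map_frame(source_posterior, target_posteriors):
    """Return the index of the target senone/cluster whose posterior
    distribution minimizes KLD from the source frame's posterior."""
    divergences = [kld(source_posterior, q) for q in target_posteriors]
    return int(np.argmin(divergences))
```

In the supervised mode the `target_posteriors` would come from the senones of the target speaker's trained GMM-HMM TTS model; in the unsupervised mode they would be the representative posteriors of the KLD-trained phonetic clusters. The converted spectral parameter sequence would then be generated from the matched senones or cluster means by the trajectory generation algorithm.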


DOI: 10.21437/Interspeech.2016-116

Cite as

Xie, F., Soong, F.K., Li, H. (2016) A KL Divergence and DNN-Based Approach to Voice Conversion without Parallel Training Sentences. Proc. Interspeech 2016, 287-291.

Bibtex
@inproceedings{Xie+2016,
author={Feng-Long Xie and Frank K. Soong and Haifeng Li},
title={A KL Divergence and DNN-Based Approach to Voice Conversion without Parallel Training Sentences},
year=2016,
booktitle={Interspeech 2016},
doi={10.21437/Interspeech.2016-116},
url={http://dx.doi.org/10.21437/Interspeech.2016-116},
pages={287--291}
}