INTERSPEECH 2014
15th Annual Conference of the International Speech Communication Association

Singapore
September 14-18, 2014

Sequence Error (SE) Minimization Training of Neural Network for Voice Conversion

Feng-Long Xie (1), Yao Qian (2), Yuchen Fan (2), Frank K. Soong (2), Haifeng Li (1)

(1) Harbin Institute of Technology, China
(2) Microsoft, China

Neural network (NN) based voice conversion, which employs a nonlinear function to map features from a source speaker to a target speaker, has been shown to outperform GMM-based voice conversion [4–7]. However, limitations remain in NN-based voice conversion; e.g., the NN is trained with a Frame Error (FE) minimization criterion, where the weights are adjusted to minimize the sum of squared errors, frame by frame, over the whole source-target, stereo training data set. In this paper, we borrow the idea of sentence-level optimization from minimum generation error (MGE) training in HMM-based TTS synthesis and replace FE minimization with Sequence Error (SE) minimization in NN training for voice conversion. The conversion error over an entire training sentence, from the source speaker to the target speaker, is minimized via a gradient descent-based back-propagation (BP) procedure. Experimental results show that speech converted by an NN first trained with FE minimization and then refined with SE minimization sounds subjectively better than speech converted by an NN trained with FE minimization only: scores on both naturalness and similarity to the target speaker are improved.
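The SE criterion described above can be sketched as follows. This is a toy NumPy illustration, not the paper's implementation: the single-hidden-layer architecture, feature dimensions, and learning rate are all assumptions made for the example. The point it shows is that the squared conversion error is accumulated over all frames of a sentence and one BP update is made per sentence, rather than updating after each frame.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the paper): 24-dim spectral features, 64 hidden units.
D, H = 24, 64
W1 = rng.normal(scale=0.1, size=(H, D)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=(D, H)); b2 = np.zeros(D)

def forward(X):
    """Map source frames X (T x D) to predicted target frames (T x D)."""
    Z = np.tanh(X @ W1.T + b1)      # hidden activations, one row per frame
    return Z @ W2.T + b2, Z

def sequence_error_step(X, Y, lr=1e-4):
    """One BP update on a whole sentence: minimize 0.5 * sum_t ||y_hat_t - y_t||^2
    accumulated over all T frames, instead of a per-frame (FE) update."""
    global W1, b1, W2, b2
    Yhat, Z = forward(X)
    E = Yhat - Y                    # (T x D) conversion error for the sentence
    # Gradients summed over the sentence's frames
    gW2 = E.T @ Z;  gb2 = E.sum(axis=0)
    dZ  = (E @ W2) * (1.0 - Z**2)   # back-propagate through tanh
    gW1 = dZ.T @ X; gb1 = dZ.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
    return 0.5 * np.sum(E**2)       # sequence error for this sentence
```

In this linear-output regression setting the FE and SE objectives differ only in update granularity; in the paper the sequence-level view matters because the error is defined over the generated trajectory of a whole sentence.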

Bibliographic reference.  Xie, Feng-Long / Qian, Yao / Fan, Yuchen / Soong, Frank K. / Li, Haifeng (2014): "Sequence error (SE) minimization training of neural network for voice conversion", In INTERSPEECH-2014, 2283-2287.