MOSNet: Deep Learning-Based Objective Assessment for Voice Conversion

Chen-Chou Lo, Szu-Wei Fu, Wen-Chin Huang, Xin Wang, Junichi Yamagishi, Yu Tsao, Hsin-Min Wang


Existing objective evaluation metrics for voice conversion (VC) do not always correlate with human perception, so training VC models with such criteria may not effectively improve the naturalness and similarity of the converted speech. In this paper, we propose deep learning-based assessment models that predict human ratings of converted speech. We adopt convolutional and recurrent neural network architectures to build a mean opinion score (MOS) predictor, termed MOSNet. The proposed models are tested on the large-scale listening-test results of the Voice Conversion Challenge (VCC) 2018. Experimental results show that the scores predicted by MOSNet are highly correlated with human MOS ratings at the system level and fairly correlated at the utterance level. We further modify MOSNet to predict similarity scores, and preliminary results show that these predictions are likewise fairly correlated with human ratings. These results confirm that the proposed models can serve as computational evaluators of VC systems, reducing the need for expensive human ratings.
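The abstract describes a MOS predictor built from convolutional and recurrent layers that maps converted speech to a predicted human rating. A minimal PyTorch sketch of such a CNN-BLSTM predictor is shown below; the layer counts, channel sizes, and hidden dimensions are illustrative assumptions, not the paper's exact configuration, and the input is assumed to be a magnitude spectrogram with frame-level scores averaged into an utterance-level score.

```python
# Hypothetical CNN-BLSTM MOS predictor in the spirit of MOSNet.
# All layer sizes are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class MOSPredictor(nn.Module):
    def __init__(self, n_freq_bins=257):
        super().__init__()
        # Convolutional front end over (batch, 1, time, freq);
        # the second conv downsamples the frequency axis by 3.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, stride=(1, 3), padding=1), nn.ReLU(),
        )
        feat_dim = 16 * ((n_freq_bins + 2) // 3)  # channels x reduced freq bins
        # Bidirectional LSTM over the time axis
        self.blstm = nn.LSTM(feat_dim, 128, batch_first=True, bidirectional=True)
        # Frame-level score head; the utterance score is the mean over frames
        self.head = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, spec):                        # spec: (batch, time, freq)
        x = self.conv(spec.unsqueeze(1))            # (batch, 16, time, freq')
        x = x.permute(0, 2, 1, 3).flatten(2)        # (batch, time, feat_dim)
        x, _ = self.blstm(x)                        # (batch, time, 256)
        frame_scores = self.head(x).squeeze(-1)     # (batch, time)
        return frame_scores.mean(dim=1)             # utterance-level score

model = MOSPredictor()
mos = model(torch.randn(2, 100, 257))  # two utterances, 100 frames each
print(mos.shape)  # one predicted score per utterance
```

In this sketch the model would be trained with a regression loss (e.g. MSE) against the mean human rating of each utterance; averaging frame-level scores lets the network handle variable-length inputs naturally.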


 DOI: 10.21437/Interspeech.2019-2003

Cite as: Lo, C., Fu, S., Huang, W., Wang, X., Yamagishi, J., Tsao, Y., Wang, H. (2019) MOSNet: Deep Learning-Based Objective Assessment for Voice Conversion. Proc. Interspeech 2019, 1541-1545, DOI: 10.21437/Interspeech.2019-2003.


@inproceedings{Lo2019,
  author={Chen-Chou Lo and Szu-Wei Fu and Wen-Chin Huang and Xin Wang and Junichi Yamagishi and Yu Tsao and Hsin-Min Wang},
  title={{MOSNet: Deep Learning-Based Objective Assessment for Voice Conversion}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={1541--1545},
  doi={10.21437/Interspeech.2019-2003},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2003}
}