Building a Mixed-Lingual Neural TTS System with Only Monolingual Data

Liumeng Xue, Wei Song, Guanghui Xu, Lei Xie, Zhizheng Wu


When deploying a Chinese neural Text-to-Speech (TTS) system, one of the challenges is to synthesize Chinese utterances that contain embedded English words or phrases. This paper investigates the problem in the encoder-decoder framework when only monolingual data from a target speaker is available. Specifically, we view the problem from two aspects: speaker consistency within an utterance and naturalness. We start the investigation with an average voice model built from multi-speaker monolingual data, i.e., Mandarin and English data. Building on that, we examine speaker embedding for speaker consistency within an utterance and phoneme embedding for naturalness and intelligibility, and study the choice of data for model training. We report the findings and discuss the challenges of building a mixed-lingual TTS system with only monolingual data.
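To make the two conditioning mechanisms mentioned in the abstract concrete, the following is a minimal, hypothetical sketch (not the authors' exact model): a shared phoneme inventory covering both Mandarin and English feeds a phoneme embedding table, and a learned speaker embedding is tiled across time steps and concatenated to the encoder inputs, as is common in multi-speaker encoder-decoder TTS. All names, dimensions, and the toy phone sets are illustrative assumptions.

```python
import numpy as np

# Toy subsets of a shared Mandarin + English phoneme inventory
# (illustrative only; real systems use full phone sets with tones/stress).
MANDARIN_PHONES = ["sh", "i4", "x", "ve2"]
ENGLISH_PHONES = ["HH", "AH0", "L", "OW1"]
PHONE_SET = MANDARIN_PHONES + ENGLISH_PHONES
PHONE_TO_ID = {p: i for i, p in enumerate(PHONE_SET)}

rng = np.random.default_rng(0)
EMB_DIM, SPK_DIM = 8, 4
phone_table = rng.normal(size=(len(PHONE_SET), EMB_DIM))  # phoneme embedding table
speaker_table = rng.normal(size=(2, SPK_DIM))             # one row per training speaker

def encoder_inputs(phones, speaker_id):
    """Look up phoneme embeddings and tile the speaker embedding onto
    every time step, so the decoder is conditioned on one speaker even
    when the phone sequence switches language mid-utterance."""
    ph = phone_table[[PHONE_TO_ID[p] for p in phones]]          # (T, EMB_DIM)
    spk = np.tile(speaker_table[speaker_id], (len(phones), 1))  # (T, SPK_DIM)
    return np.concatenate([ph, spk], axis=1)                    # (T, EMB_DIM + SPK_DIM)

# A mixed-lingual input: Mandarin phones followed by English phones,
# all conditioned on the same target speaker's embedding.
x = encoder_inputs(["sh", "i4", "HH", "AH0"], speaker_id=0)
print(x.shape)  # (4, 12)
```

In trained systems both tables are learned jointly with the acoustic model; tying a single speaker embedding to every time step is what keeps the voice consistent when the phoneme stream crosses the language boundary.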


DOI: 10.21437/Interspeech.2019-3191

Cite as: Xue, L., Song, W., Xu, G., Xie, L., Wu, Z. (2019) Building a Mixed-Lingual Neural TTS System with Only Monolingual Data. Proc. Interspeech 2019, 2060-2064, DOI: 10.21437/Interspeech.2019-3191.


@inproceedings{Xue2019,
  author={Liumeng Xue and Wei Song and Guanghui Xu and Lei Xie and Zhizheng Wu},
  title={{Building a Mixed-Lingual Neural TTS System with Only Monolingual Data}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={2060--2064},
  doi={10.21437/Interspeech.2019-3191},
  url={http://dx.doi.org/10.21437/Interspeech.2019-3191}
}