ISCA Archive Interspeech 2021

Learning Explicit Prosody Models and Deep Speaker Embeddings for Atypical Voice Conversion

Disong Wang, Songxiang Liu, Lifa Sun, Xixin Wu, Xunying Liu, Helen Meng

Though significant progress has been made in voice conversion (VC) for typical speech, VC for atypical speech, e.g., dysarthric and second-language (L2) speech, remains a challenge, since it involves correcting atypical prosody while maintaining speaker identity. To address this issue, we propose a VC system with explicit prosodic modelling and deep speaker embedding (DSE) learning. First, a speech encoder strives to extract robust phoneme embeddings from atypical speech. Second, a prosody corrector takes the phoneme embeddings as input to infer typical phoneme durations and pitch values. Third, a conversion model takes the phoneme embeddings and typical prosody features as inputs to generate the converted speech, conditioned on the target DSE, which is learned via a speaker encoder or speaker adaptation. Extensive experiments demonstrate that speaker adaptation achieves higher speaker similarity, while the speaker-encoder-based conversion model greatly reduces dysarthric and non-native pronunciation patterns and improves speech intelligibility. A comparison of speech recognition results between the original dysarthric speech and the converted speech shows absolute reductions of 47.6% in character error rate (CER) and 29.3% in word error rate (WER).
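To make the three-stage pipeline in the abstract concrete, here is a minimal sketch of its data flow. All function names, feature shapes, and the placeholder arithmetic are illustrative assumptions for exposition only, not the authors' implementation.

```python
# Hypothetical sketch of the paper's three-stage VC pipeline.
# "Frames" are toy feature vectors; the real system operates on
# learned neural representations.

def speech_encoder(atypical_speech):
    # Stage 1: extract robust phoneme embeddings from atypical speech
    # (placeholder transform standing in for a trained encoder).
    return [[f * 0.1 for f in frame] for frame in atypical_speech]

def prosody_corrector(phoneme_embeddings):
    # Stage 2: infer typical phoneme durations and pitch values
    # from the phoneme embeddings (placeholder heuristics).
    durations = [len(e) for e in phoneme_embeddings]
    pitches = [sum(e) for e in phoneme_embeddings]
    return durations, pitches

def conversion_model(phoneme_embeddings, durations, pitches, target_dse):
    # Stage 3: generate converted speech from phoneme embeddings and
    # typical prosody features, conditioned on the target deep speaker
    # embedding (DSE); here we simply attach the DSE to each frame.
    return [(e, d, p, target_dse)
            for e, d, p in zip(phoneme_embeddings, durations, pitches)]

# Toy end-to-end run.
speech = [[1.0, 2.0], [3.0, 4.0]]
emb = speech_encoder(speech)
dur, f0 = prosody_corrector(emb)
converted = conversion_model(emb, dur, f0, target_dse="spk-01")
```

In the paper, the DSE passed to the final stage comes either from a speaker encoder (zero-shot) or from speaker adaptation; in this sketch it is reduced to an opaque identifier.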


doi: 10.21437/Interspeech.2021-285

Cite as: Wang, D., Liu, S., Sun, L., Wu, X., Liu, X., Meng, H. (2021) Learning Explicit Prosody Models and Deep Speaker Embeddings for Atypical Voice Conversion. Proc. Interspeech 2021, 4813-4817, doi: 10.21437/Interspeech.2021-285

@inproceedings{wang21ja_interspeech,
  author={Disong Wang and Songxiang Liu and Lifa Sun and Xixin Wu and Xunying Liu and Helen Meng},
  title={{Learning Explicit Prosody Models and Deep Speaker Embeddings for Atypical Voice Conversion}},
  year=2021,
  booktitle={Proc. Interspeech 2021},
  pages={4813--4817},
  doi={10.21437/Interspeech.2021-285}
}