ISCA Archive Interspeech 2017

Integrating Articulatory Information in Deep Learning-Based Text-to-Speech Synthesis

Beiming Cao, Myungjong Kim, Jan van Santen, Ted Mau, Jun Wang

Articulatory information has been shown to be effective in improving the performance of hidden Markov model (HMM)-based text-to-speech (TTS) synthesis. Recently, deep learning-based TTS has outperformed HMM-based approaches, yet articulatory information has rarely been integrated into deep learning-based TTS. This paper investigated the effectiveness of integrating articulatory movement data into deep learning-based TTS. Articulatory information was integrated in two ways: (1) direct integration, where articulatory and acoustic features were jointly the output of a deep neural network (DNN), and (2) direct integration plus forward mapping, where the output articulatory features were mapped to acoustic features by an additional DNN; these forward-mapped acoustic features were then combined with the output acoustic features to produce the final acoustic features. Articulatory (tongue and lip) and acoustic data collected from male and female speakers were used in the experiments. Both objective measures and subjective judgments by human listeners showed that the approaches integrating articulatory information outperformed the baseline (without articulatory information) in terms of naturalness and speaker voice identity (voice similarity).
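The two integration schemes described in the abstract can be sketched with toy feedforward networks. Everything below is an illustrative assumption rather than the paper's configuration: the layer sizes, the feature dimensions (linguistic input, acoustic output, articulatory output), the untrained random weights, and the simple averaging used to combine the two acoustic streams.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    # Toy feedforward net with random (untrained) weights, for illustration only.
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.tanh(x)  # hidden-layer nonlinearity
    return x

# Hypothetical feature dimensions: linguistic input, acoustic, articulatory.
D_LING, D_AC, D_ART = 100, 60, 18

# (1) Direct integration: one DNN maps linguistic features to the
#     concatenation of acoustic and articulatory features.
joint_net = mlp([D_LING, 256, D_AC + D_ART])

# (2) Forward mapping: an additional DNN maps the predicted articulatory
#     features back to acoustic features.
fwd_net = mlp([D_ART, 128, D_AC])

x = rng.standard_normal((5, D_LING))          # 5 frames of linguistic input
joint_out = forward(joint_net, x)
acoustic, articulatory = joint_out[:, :D_AC], joint_out[:, D_AC:]
acoustic_fwd = forward(fwd_net, articulatory)

# Combine the two acoustic streams; a plain average is used here as a
# placeholder, since the combination rule is not detailed in the abstract.
final_acoustic = 0.5 * (acoustic + acoustic_fwd)
print(final_acoustic.shape)  # (5, 60)
```

In the first scheme, only `acoustic` would be used; the second scheme adds the forward-mapped stream before producing the final acoustic features.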

doi: 10.21437/Interspeech.2017-1762

Cite as: Cao, B., Kim, M., van Santen, J., Mau, T., Wang, J. (2017) Integrating Articulatory Information in Deep Learning-Based Text-to-Speech Synthesis. Proc. Interspeech 2017, 254-258, doi: 10.21437/Interspeech.2017-1762

@inproceedings{cao17_interspeech,
  author={Beiming Cao and Myungjong Kim and Jan van Santen and Ted Mau and Jun Wang},
  title={{Integrating Articulatory Information in Deep Learning-Based Text-to-Speech Synthesis}},
  booktitle={Proc. Interspeech 2017},
  year={2017},
  pages={254--258},
  doi={10.21437/Interspeech.2017-1762}
}