Conditional Variational Auto-Encoder for Text-Driven Expressive AudioVisual Speech Synthesis

Sara Dahmani, Vincent Colotte, Valérian Girard, Slim Ouni


In recent years, the performance of speech synthesis systems has improved thanks to deep learning-based models, but generating expressive audiovisual speech is still an open issue. Variational auto-encoders (VAEs) have recently been proposed to learn latent representations of data. In this paper, we present a system for expressive text-to-audiovisual speech synthesis that learns a latent embedding space of emotions using a conditional generative model based on the variational auto-encoder framework. When conditioned on textual input, the VAE learns an embedded representation that captures emotion characteristics from the signal while remaining invariant to the phonetic content of the utterances. We applied this method in an unsupervised manner to generate duration, acoustic, and visual features of speech. This conditional variational auto-encoder (CVAE) was also used to blend emotions together: the model was able to generate nuances of a given emotion, or new emotions that do not exist in our database. We conducted three perceptual experiments to evaluate our findings.
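The two CVAE operations the abstract relies on, sampling a latent code with the reparameterization trick and blending emotions by interpolating in the latent space, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the latent dimension, the emotion codes `z_joy` and `z_sad`, and the function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 16  # assumed latent size, not taken from the paper


def reparameterize(mu, logvar, rng):
    """Sample z = mu + sigma * eps (the VAE reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps


def blend(z_a, z_b, alpha):
    """Linearly interpolate two latent emotion codes (alpha in [0, 1])."""
    return (1.0 - alpha) * z_a + alpha * z_b


# Hypothetical emotion codes, standing in for embeddings the CVAE
# encoder would produce from emotional recordings.
z_joy = rng.standard_normal(LATENT_DIM)
z_sad = rng.standard_normal(LATENT_DIM)

# A sampled latent variable around z_joy, and a 50/50 emotion blend.
z_sample = reparameterize(z_joy, np.full(LATENT_DIM, -2.0), rng)
z_mix = blend(z_joy, z_sad, 0.5)
```

In a full system, `z_mix` would be fed to the decoder (together with the text conditioning) to synthesize duration, acoustic, and visual features carrying the blended expression; intermediate values of `alpha` would yield nuances between the two emotions.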


DOI: 10.21437/Interspeech.2019-2848

Cite as: Dahmani, S., Colotte, V., Girard, V., Ouni, S. (2019) Conditional Variational Auto-Encoder for Text-Driven Expressive AudioVisual Speech Synthesis. Proc. Interspeech 2019, 2598-2602, DOI: 10.21437/Interspeech.2019-2848.


@inproceedings{Dahmani2019,
  author={Sara Dahmani and Vincent Colotte and Valérian Girard and Slim Ouni},
  title={{Conditional Variational Auto-Encoder for Text-Driven Expressive AudioVisual Speech Synthesis}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={2598--2602},
  doi={10.21437/Interspeech.2019-2848},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2848}
}