Modeling Labial Coarticulation with Bidirectional Gated Recurrent Networks and Transfer Learning

Théo Biasutto--Lervat, Sara Dahmani, Slim Ouni


In this study, we investigate how to learn labial coarticulation in order to generate a sparse representation of the face from speech. To this end, we experiment with a sequential deep learning model, bidirectional gated recurrent networks, which have achieved good results on the articulatory inversion problem and should therefore be able to handle coarticulation effects. As acquiring audiovisual corpora is expensive and time-consuming, we designed our solution to counteract the lack of data. First, we use phonetic information (phoneme labels and their durations) as input to ensure speaker independence; second, we experiment with pretraining strategies to reach acceptable performance. We demonstrate how a careful initialization of the last layers of the network can greatly ease training and help handle coarticulation effects. This initialization relies on dimensionality reduction strategies, injecting knowledge of a useful latent representation of the visual data into the network. We focus on two data-driven tools (PCA and autoencoders) and one hand-crafted latent space from the animation community, blendshape decomposition. We trained and evaluated the model on a corpus of 4 hours of French speech and obtained an average RMSE close to 1.3 mm.
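The PCA variant of the last-layer initialization described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the visual data is a matrix of facial landmark coordinates per frame, fits PCA via SVD with NumPy only, and uses the principal components and data mean as the weights and bias of a linear output layer that decodes latent coefficients back into coordinates. All names (`pca_init_last_layer`, the array shapes) are hypothetical.

```python
import numpy as np

def pca_init_last_layer(visual_data, n_components):
    """Fit PCA on visual frames (n_frames, n_coords) and return
    (W, b) for a linear output layer y = h @ W + b that decodes
    PCA coefficients h into facial coordinates y."""
    mean = visual_data.mean(axis=0)
    centered = visual_data - mean
    # Rows of vt are the principal axes, ordered by variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    W = vt[:n_components]   # (n_components, n_coords): decoder weights
    b = mean                # bias reproduces the mean face
    return W, b

# Hypothetical usage on random stand-in data: encode frames into the
# latent space, then decode with the initialized layer.
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 30))          # fake landmark coordinates
W, b = pca_init_last_layer(frames, n_components=30)
coeffs = (frames - b) @ W.T                  # encode into PCA space
recon = coeffs @ W + b                       # decode with the "last layer"
```

With all components kept, the decode step reconstructs the frames exactly; in practice one would keep only the leading components, so the recurrent network starts training with an output layer that already spans the dominant modes of facial variation.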


 DOI: 10.21437/Interspeech.2019-2097

Cite as: Biasutto--Lervat, T., Dahmani, S., Ouni, S. (2019) Modeling Labial Coarticulation with Bidirectional Gated Recurrent Networks and Transfer Learning. Proc. Interspeech 2019, 2608-2612, DOI: 10.21437/Interspeech.2019-2097.


@inproceedings{Biasutto--Lervat2019,
  author={Théo Biasutto--Lervat and Sara Dahmani and Slim Ouni},
  title={{Modeling Labial Coarticulation with Bidirectional Gated Recurrent Networks and Transfer Learning}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={2608--2612},
  doi={10.21437/Interspeech.2019-2097},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2097}
}