ISCA Archive Interspeech 2021

LiRA: Learning Visual Speech Representations from Audio Through Self-Supervision

Pingchuan Ma, Rodrigo Mira, Stavros Petridis, Björn W. Schuller, Maja Pantic

The large amount of audiovisual content being shared online today has drawn substantial attention to the prospect of audio-visual self-supervised learning. Recent works have focused on each of these modalities separately, while others have attempted to model both simultaneously in a cross-modal fashion. However, comparatively little attention has been given to leveraging one modality as a training objective to learn from the other. In this work, we propose Learning visual speech Representations from Audio via self-supervision (LiRA). Specifically, we train a ResNet+Conformer model to predict acoustic features from unlabelled visual speech. We find that this pre-trained model can be leveraged towards word-level and sentence-level lip-reading through feature extraction and fine-tuning experiments. We show that our approach significantly outperforms other self-supervised methods on the Lip Reading in the Wild (LRW) dataset and achieves state-of-the-art performance on Lip Reading Sentences 2 (LRS2) using only a fraction of the total labelled data.
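The abstract describes the pre-training setup only at a high level. The sketch below illustrates the general shape of such a video-to-audio regression objective in PyTorch: a per-frame visual front-end followed by a temporal encoder regresses time-aligned acoustic features extracted from the audio track. The module names, dimensions, the L1 loss, and the use of a Transformer encoder as a simplified stand-in for the Conformer are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn
from torchvision.models import resnet18

class VisualToAudioRegressor(nn.Module):
    """Hypothetical sketch of a LiRA-style self-supervised objective (not the authors' code)."""

    def __init__(self, acoustic_dim=256, d_model=512):
        super().__init__()
        # Per-frame visual features from a ResNet trunk (greyscale mouth crops assumed).
        trunk = resnet18(weights=None)
        trunk.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        trunk.fc = nn.Linear(trunk.fc.in_features, d_model)
        self.frontend = trunk
        # Temporal encoder over the frame sequence; the paper uses a Conformer,
        # a vanilla Transformer encoder stands in here for simplicity.
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=6)
        # Project each time step to the acoustic feature targets.
        self.head = nn.Linear(d_model, acoustic_dim)

    def forward(self, frames):
        # frames: (batch, time, 1, H, W) greyscale mouth-region crops
        b, t = frames.shape[:2]
        feats = self.frontend(frames.flatten(0, 1)).view(b, t, -1)
        return self.head(self.encoder(feats))

# Self-supervised step: regress acoustic features computed from the audio track,
# using only unlabelled video/audio pairs (no transcriptions).
model = VisualToAudioRegressor()
frames = torch.randn(2, 25, 1, 88, 88)    # one second of video at 25 fps (assumed crop size)
audio_targets = torch.randn(2, 25, 256)   # acoustic features, time-aligned to the video frames
loss = nn.functional.l1_loss(model(frames), audio_targets)
loss.backward()

After pre-training, the visual front-end and encoder can be reused for word-level or sentence-level lip-reading, either as a frozen feature extractor or by fine-tuning with a labelled classification or sequence-to-sequence head.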


doi: 10.21437/Interspeech.2021-1360

Cite as: Ma, P., Mira, R., Petridis, S., Schuller, B.W., Pantic, M. (2021) LiRA: Learning Visual Speech Representations from Audio Through Self-Supervision. Proc. Interspeech 2021, 3011-3015, doi: 10.21437/Interspeech.2021-1360

@inproceedings{ma21c_interspeech,
  author={Pingchuan Ma and Rodrigo Mira and Stavros Petridis and Björn W. Schuller and Maja Pantic},
  title={{LiRA: Learning Visual Speech Representations from Audio Through Self-Supervision}},
  year=2021,
  booktitle={Proc. Interspeech 2021},
  pages={3011--3015},
  doi={10.21437/Interspeech.2021-1360}
}