Investigating the Lombard Effect Influence on End-to-End Audio-Visual Speech Recognition

Pingchuan Ma, Stavros Petridis, Maja Pantic


Several audio-visual speech recognition models have recently been proposed with the aim of improving robustness over audio-only models in the presence of noise. However, almost all of them ignore the Lombard effect, i.e., the change in speaking style that occurs in noisy environments to make speech more intelligible, and which affects both the acoustic characteristics of speech and the lip movements. In this paper, we investigate the impact of the Lombard effect on audio-visual speech recognition. To the best of our knowledge, this is the first work to do so using end-to-end deep architectures and to present results on unseen speakers. Our results show that properly modelling Lombard speech is always beneficial: even when a relatively small amount of Lombard speech is added to the training set, performance in a real scenario, where noisy Lombard speech is present, can be significantly improved. We also show that the standard approach followed in the literature, in which a model is trained and tested on noisy plain speech, provides a correct estimate of the video-only performance and slightly underestimates the audio-visual performance. For audio-only approaches, performance is overestimated for SNRs above -3 dB and underestimated for lower SNRs.


DOI: 10.21437/Interspeech.2019-2726

Cite as: Ma, P., Petridis, S., Pantic, M. (2019) Investigating the Lombard Effect Influence on End-to-End Audio-Visual Speech Recognition. Proc. Interspeech 2019, 4090-4094, DOI: 10.21437/Interspeech.2019-2726.


@inproceedings{Ma2019,
  author={Pingchuan Ma and Stavros Petridis and Maja Pantic},
  title={{Investigating the Lombard Effect Influence on End-to-End Audio-Visual Speech Recognition}},
  year={2019},
  booktitle={Proc. Interspeech 2019},
  pages={4090--4094},
  doi={10.21437/Interspeech.2019-2726},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2726}
}