My Lips Are Concealed: Audio-Visual Speech Enhancement Through Obstructions

Triantafyllos Afouras, Joon Son Chung, Andrew Zisserman


Our objective is an audio-visual model for separating a single speaker from a mixture of sounds such as other speakers and background noise. Moreover, we wish to hear the speaker even when the visual cues are temporarily absent due to occlusion.

To this end we introduce a deep audio-visual speech enhancement network that is able to separate a speaker's voice by conditioning on the speaker's lip movements and/or a representation of their voice. The voice representation can be obtained either (i) by enrollment, or (ii) by self-enrollment — learning the representation on-the-fly given sufficient unobstructed visual input. The model is trained by blending audio from different sources, and by introducing artificial occlusions around the mouth region that prevent the visual modality from dominating.
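The two training-data augmentations described above — blending audio signals and artificially occluding the mouth region — can be sketched roughly as follows. This is an illustrative sketch only; the function names, patch sizes, and SNR-based mixing are assumptions, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mix_audio(target, interferer, snr_db=0.0):
    """Blend a target waveform with an interfering one at a given SNR.

    Illustrative sketch: the paper blends audios to create training
    mixtures; the exact mixing procedure and SNR range are assumptions.
    """
    p_target = np.mean(target ** 2)
    p_interf = np.mean(interferer ** 2)
    # Scale the interferer so the mixture has the requested signal-to-noise ratio.
    scale = np.sqrt(p_target / (p_interf * 10 ** (snr_db / 10)))
    return target + scale * interferer

def occlude_mouth(frames, p=0.5, patch=32):
    """Zero out a random square patch in a fraction of video frames,
    simulating occlusions around the mouth so the model cannot rely on
    the visual modality alone. `frames` has shape (T, H, W); the patch
    size and occlusion probability here are hypothetical.
    """
    frames = frames.copy()
    T, H, W = frames.shape
    for t in range(T):
        if rng.random() < p:
            y = rng.integers(0, H - patch + 1)
            x = rng.integers(0, W - patch + 1)
            frames[t, y:y + patch, x:x + patch] = 0.0
    return frames
```

In training, the network would receive the mixed waveform and the (possibly occluded) frames as input, with the clean target waveform as supervision.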

The method is speaker-independent, and we demonstrate it on real examples of speakers unheard (and unseen) during training. It also improves over previous models, in particular for cases where the visual modality is occluded.


DOI: 10.21437/Interspeech.2019-3114

Cite as: Afouras, T., Chung, J.S., Zisserman, A. (2019) My Lips Are Concealed: Audio-Visual Speech Enhancement Through Obstructions. Proc. Interspeech 2019, 4295-4299, DOI: 10.21437/Interspeech.2019-3114.


@inproceedings{Afouras2019,
  author={Triantafyllos Afouras and Joon Son Chung and Andrew Zisserman},
  title={{My Lips Are Concealed: Audio-Visual Speech Enhancement Through Obstructions}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={4295--4299},
  doi={10.21437/Interspeech.2019-3114},
  url={http://dx.doi.org/10.21437/Interspeech.2019-3114}
}