Recent work has explored deep architectures for learning multimodal speech representations (e.g. audio and images, articulation and audio) in a supervised way. Here we investigate the role of combining different speech modalities, i.e. audio and visual information representing lip movements, in a weakly supervised way using Siamese networks and lexical same-different side information. In particular, we ask whether one modality can benefit from the other to provide a richer representation for phone recognition in a weakly supervised setting. We introduce mono-task and multi-task methods for merging speech and visual modalities for phone recognition. The mono-task learning consists of applying a Siamese network to the concatenation of the two modalities, while the multi-task learning receives several different combinations of modalities at training time. We show that multi-task learning enhances discriminability for visual and multimodal inputs while minimally impacting auditory inputs. Furthermore, we present a qualitative analysis of the obtained phone embeddings, and show that cross-modal visual input can improve the discriminability of phonological features which are visually discernible (rounding, open/close, labial place of articulation), resulting in representations that are closer to abstract linguistic features than those based on audio only.
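To make the mono-task setup concrete, below is a minimal sketch (not the authors' code) of a Siamese tower that embeds the concatenation of audio and visual frames and is trained with a same/different pair objective standing in for the lexical side information. All dimensions, layer sizes, and the choice of a cosine-embedding loss are assumptions for illustration.

```python
import torch
import torch.nn as nn

AUDIO_DIM, VISUAL_DIM, EMBED_DIM = 40, 50, 100  # hypothetical feature sizes


class SiameseTower(nn.Module):
    """Shared embedding network applied to both members of a pair."""

    def __init__(self, input_dim: int, embed_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 500), nn.Sigmoid(),
            nn.Linear(500, 500), nn.Sigmoid(),
            nn.Linear(500, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def monotask_step(tower, audio_a, visual_a, audio_b, visual_b, same, loss_fn):
    """One training step: embed concatenated audio+visual inputs for both
    items of a pair, then pull 'same' pairs together and push 'different'
    pairs apart (a stand-in for the paper's same/different objective)."""
    emb_a = tower(torch.cat([audio_a, visual_a], dim=-1))
    emb_b = tower(torch.cat([audio_b, visual_b], dim=-1))
    # CosineEmbeddingLoss expects targets in {+1, -1}
    target = torch.where(same, torch.tensor(1.0), torch.tensor(-1.0))
    return loss_fn(emb_a, emb_b, target)


if __name__ == "__main__":
    tower = SiameseTower(AUDIO_DIM + VISUAL_DIM, EMBED_DIM)
    loss_fn = nn.CosineEmbeddingLoss(margin=0.5)
    batch = 8
    audio_a, audio_b = torch.randn(batch, AUDIO_DIM), torch.randn(batch, AUDIO_DIM)
    visual_a, visual_b = torch.randn(batch, VISUAL_DIM), torch.randn(batch, VISUAL_DIM)
    same = torch.randint(0, 2, (batch,)).bool()
    loss = monotask_step(tower, audio_a, visual_a, audio_b, visual_b, same, loss_fn)
    loss.backward()
```

Under the same assumptions, the multi-task variant described above would sample different modality combinations (audio only, visual only, audio+visual) across pairs at training time and share the embedding towers accordingly.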
Cite as: Chaabouni, R., Dunbar, E., Zeghidour, N., Dupoux, E. (2017) Learning Weakly Supervised Multimodal Phoneme Embeddings. Proc. Interspeech 2017, 2218-2222, doi: 10.21437/Interspeech.2017-1689
@inproceedings{chaabouni17_interspeech,
  author={Rahma Chaabouni and Ewan Dunbar and Neil Zeghidour and Emmanuel Dupoux},
  title={{Learning Weakly Supervised Multimodal Phoneme Embeddings}},
  year={2017},
  booktitle={Proc. Interspeech 2017},
  pages={2218--2222},
  doi={10.21437/Interspeech.2017-1689}
}