On the Contributions of Visual and Textual Supervision in Low-Resource Semantic Speech Retrieval

Ankita Pasad, Bowen Shi, Herman Kamper, Karen Livescu


Recent work has shown that speech paired with images can be used to learn semantically meaningful speech representations even without any textual supervision. In real-world low-resource settings, however, we often have access to some transcribed speech. We study whether and how visual grounding is useful in the presence of varying amounts of textual supervision. In particular, we consider the task of semantic speech retrieval in a low-resource setting. We use a previously studied data set and task, where models are trained on images with spoken captions and evaluated on human judgments of semantic relevance. We propose a multitask learning approach to leverage both visual and textual modalities, with visual supervision in the form of keyword probabilities from an external tagger. We find that visual grounding is helpful even in the presence of textual supervision, and we analyze this effect over a range of sizes of transcribed data sets. With ~5 hours of transcribed speech, we obtain 23% higher average precision when also using visual supervision.


DOI: 10.21437/Interspeech.2019-3051

Cite as: Pasad, A., Shi, B., Kamper, H., Livescu, K. (2019) On the Contributions of Visual and Textual Supervision in Low-Resource Semantic Speech Retrieval. Proc. Interspeech 2019, 4195-4199, DOI: 10.21437/Interspeech.2019-3051.


@inproceedings{Pasad2019,
  author={Ankita Pasad and Bowen Shi and Herman Kamper and Karen Livescu},
  title={{On the Contributions of Visual and Textual Supervision in Low-Resource Semantic Speech Retrieval}},
  year={2019},
  booktitle={Proc. Interspeech 2019},
  pages={4195--4199},
  doi={10.21437/Interspeech.2019-3051},
  url={http://dx.doi.org/10.21437/Interspeech.2019-3051}
}