ISCA Archive Interspeech 2015

Incorporating visual information for spoken term detection

Shahram Kalantari, David Dean, Sridha Sridharan

Spoken term detection (STD) is the task of looking up a spoken term in a large volume of speech segments. To provide fast search, speech segments are first indexed into an intermediate representation using speech recognition engines that provide multiple hypotheses for each segment. Approximate matching techniques are usually applied at the search stage to compensate for the poor performance of automatic speech recognition engines during indexing. Recently, using visual information in addition to audio information has been shown to improve phone recognition performance, particularly in noisy environments. In this paper, we make use of visual information, in the form of the speaker's lip movements, in the indexing stage and investigate its effect on STD performance. In particular, we investigate whether gains in phone recognition accuracy carry through the approximate matching stage to provide similar gains in the final audio-visual STD system over a traditional audio-only approach. We also investigate the effect of using visual information on STD performance in different noise environments.
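The search stage described above can be illustrated with a minimal sketch: a query phone sequence is matched against an indexed phone transcript by edit distance over a sliding window, so that recognition errors introduced during indexing (substitutions, insertions, deletions) do not prevent a hit. All names, the fixed-length window, and the distance threshold are illustrative assumptions, not the system described in the paper.

```python
# Sketch of approximate phone-sequence matching for STD.
# The transcript, phone labels, and threshold are illustrative only.

def edit_distance(a, b):
    """Levenshtein distance between two phone sequences (one-row DP)."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                         # deletion
                        dp[j - 1] + 1,                     # insertion
                        prev + (a[i - 1] != b[j - 1]))     # substitution
            prev = cur
    return dp[n]

def search(index, query, max_dist=1):
    """Return (position, distance) hits where the query phone sequence
    approximately matches a window of the indexed transcript."""
    hits = []
    w = len(query)
    for pos in range(len(index) - w + 1):
        d = edit_distance(index[pos:pos + w], query)
        if d <= max_dist:
            hits.append((pos, d))
    return hits

# Example: the recogniser output contains a substitution error
# ("d" recognised as "t"), but approximate matching still finds the term.
transcript = ["sil", "h", "eh", "l", "ow", "w", "er", "l", "t", "sil"]
print(search(transcript, ["w", "er", "l", "d"]))  # -> [(5, 1)]
```

The fixed-length window is a simplification: a real system would also consider spans whose length differs from the query's, and would typically search a lattice of multiple recognition hypotheses rather than a single one-best transcript.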

doi: 10.21437/Interspeech.2015-203

Cite as: Kalantari, S., Dean, D., Sridharan, S. (2015) Incorporating visual information for spoken term detection. Proc. Interspeech 2015, 558-562, doi: 10.21437/Interspeech.2015-203

@inproceedings{kalantari15_interspeech,
  author={Shahram Kalantari and David Dean and Sridha Sridharan},
  title={{Incorporating visual information for spoken term detection}},
  year=2015,
  booktitle={Proc. Interspeech 2015},
  pages={558--562},
  doi={10.21437/Interspeech.2015-203}
}