SpeechYOLO: Detection and Localization of Speech Objects

Yael Segal, Tzeviya Sylvia Fuchs, Joseph Keshet


In this paper, we propose to apply object detection methods from the vision domain to the speech recognition domain, by treating audio fragments as objects. More specifically, we present SpeechYOLO, which is inspired by the YOLO algorithm [1] for object detection in images. The goal of SpeechYOLO is to localize boundaries of utterances within the input signal and to correctly classify them. Our system is composed of a convolutional neural network with a simple least-mean-squares loss function. We evaluated the system on several keyword spotting tasks that include corpora of read speech and spontaneous speech. Our system compares favorably with other algorithms trained for both localization and classification.
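To illustrate the idea of a least-squares detection loss over time cells, here is a minimal sketch in the spirit of the YOLO-style objective the abstract describes. The cell layout, column names, and weighting factors (`lambda_coord`, `lambda_noobj`) are illustrative assumptions borrowed from the vision YOLO formulation, not the paper's exact loss.

```python
import numpy as np

def speech_yolo_loss(pred, target, lambda_coord=1.0, lambda_noobj=0.5):
    """Hypothetical YOLO-style squared-error loss for speech events.

    pred, target: arrays of shape (cells, 3 + num_classes), where each row
    holds [confidence, event center within the cell, event width,
    class scores...]. All weights are assumed, not taken from the paper.
    """
    obj = target[:, 0] == 1.0  # cells that actually contain a speech object

    loss = 0.0
    # Localization: squared error on center/width, only for object cells.
    loss += lambda_coord * np.sum((pred[obj, 1:3] - target[obj, 1:3]) ** 2)
    # Confidence: squared error, down-weighted for empty cells so the
    # many background cells do not dominate the objective.
    loss += np.sum((pred[obj, 0] - target[obj, 0]) ** 2)
    loss += lambda_noobj * np.sum((pred[~obj, 0] - target[~obj, 0]) ** 2)
    # Classification: squared error on class scores, only for object cells.
    loss += np.sum((pred[obj, 3:] - target[obj, 3:]) ** 2)
    return loss
```

A perfect prediction yields zero loss; a small error in the predicted event center contributes its squared deviation scaled by `lambda_coord`.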


 DOI: 10.21437/Interspeech.2019-1749

Cite as: Segal, Y., Fuchs, T.S., Keshet, J. (2019) SpeechYOLO: Detection and Localization of Speech Objects. Proc. Interspeech 2019, 4210-4214, DOI: 10.21437/Interspeech.2019-1749.


@inproceedings{Segal2019,
  author={Yael Segal and Tzeviya Sylvia Fuchs and Joseph Keshet},
  title={{SpeechYOLO: Detection and Localization of Speech Objects}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={4210--4214},
  doi={10.21437/Interspeech.2019-1749},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1749}
}