Fast ASR-free and Almost Zero-resource Keyword Spotting Using DTW and CNNs for Humanitarian Monitoring

Raghav Menon, Herman Kamper, John Quinn, Thomas Niesler


We use dynamic time warping (DTW) as supervision for training a convolutional neural network (CNN) based keyword spotting system using a small set of spoken isolated keywords. The aim is to allow rapid deployment of a keyword spotting system in a new language to support urgent United Nations (UN) relief programmes in parts of Africa where languages are extremely under-resourced and the development of annotated speech resources is infeasible. First, we use 1920 recorded keywords (40 keyword types, 34 minutes of speech) as exemplars in a DTW-based template matching system and apply it to untranscribed broadcast speech. Then, we use the resulting DTW scores as targets to train a CNN on the same unlabelled speech. In this way we use just 34 minutes of labelled speech, but leverage a large amount of unlabelled data for training. While the resulting CNN keyword spotter cannot match the performance of the DTW-based system, it substantially outperforms a CNN classifier trained only on the keywords, improving the area under the ROC curve from 0.54 to 0.64. Because our CNN system is several orders of magnitude faster at runtime than the DTW system, it represents the most viable keyword spotter on this extremely limited dataset.
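The core idea in the abstract — scoring untranscribed utterances against a small set of keyword exemplars with DTW, then using those scores as soft supervision for a CNN — can be sketched as follows. This is a minimal illustration, not the paper's implementation: `dtw_cost` and the exponential score-to-target mapping are assumptions for the sketch, and the paper additionally slides templates over longer utterances rather than comparing whole sequences.

```python
import numpy as np

def dtw_cost(x, y):
    """Normalised DTW alignment cost between two feature sequences
    (frames x dims), using per-frame Euclidean distance. The cumulative
    cost is divided by the combined sequence length so that scores for
    sequences of different durations are comparable."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(x[i - 1] - y[j - 1])
            D[i, j] = d + min(D[i - 1, j],      # insertion
                              D[i, j - 1],      # deletion
                              D[i - 1, j - 1])  # match
    return D[n, m] / (n + m)

def soft_target(exemplars, utterance):
    """Score an utterance against one keyword type: take the best
    (lowest) DTW cost over all recorded exemplars of that keyword and
    map it to a similarity in (0, 1]. These per-keyword similarities
    are the targets the CNN is trained to predict from the utterance
    alone, so that DTW is no longer needed at runtime.
    The exp(-cost) mapping is a hypothetical choice for this sketch."""
    best = min(dtw_cost(ex, utterance) for ex in exemplars)
    return float(np.exp(-best))
```

At runtime the trained CNN replaces both loops above with a single forward pass per utterance, which is why the CNN spotter is several orders of magnitude faster than DTW template matching even though it is trained from DTW's outputs.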


 DOI: 10.21437/Interspeech.2018-1580

Cite as: Menon, R., Kamper, H., Quinn, J., Niesler, T. (2018) Fast ASR-free and Almost Zero-resource Keyword Spotting Using DTW and CNNs for Humanitarian Monitoring. Proc. Interspeech 2018, 2608-2612, DOI: 10.21437/Interspeech.2018-1580.


@inproceedings{Menon2018,
  author={Raghav Menon and Herman Kamper and John Quinn and Thomas Niesler},
  title={Fast ASR-free and Almost Zero-resource Keyword Spotting Using DTW and CNNs for Humanitarian Monitoring},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={2608--2612},
  doi={10.21437/Interspeech.2018-1580},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1580}
}