Multiple Instance Deep Learning for Weakly Supervised Small-Footprint Audio Event Detection

Shao-Yen Tseng, Juncheng Li, Yun Wang, Florian Metze, Joseph Szurley, Samarjit Das


State-of-the-art audio event detection (AED) systems rely on supervised learning using strongly labeled data. However, this dependence severely limits scalability to large-scale datasets, where fine-resolution annotations are too expensive to obtain. In this paper, we propose a small-footprint multiple instance learning (MIL) framework for multi-class AED using weakly annotated labels. The proposed MIL framework uses audio embeddings extracted from a pre-trained convolutional neural network as input features. We show that, by using audio embeddings, the MIL framework can be implemented with a simple DNN whose performance is comparable to that of recurrent neural networks. We evaluate our approach by training an audio tagging system on a subset of AudioSet, a large collection of weakly labeled YouTube video excerpts. Combined with a late-fusion approach, we improve the F1 score of a baseline audio tagging system by 17%. We show that the audio embeddings extracted by the convolutional neural network significantly boost the performance of all MIL models. This framework reduces the model complexity of the AED system and is suitable for applications where computational resources are limited.
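The core mechanism the abstract describes — scoring each frame-level audio embedding with a small DNN and aggregating the per-instance scores into a clip-level (bag-level) prediction under weak labels — can be sketched in a few lines. The snippet below is a minimal NumPy illustration of MIL with max pooling over instances; the 128-dimensional embeddings, layer sizes, random weights, and max-pooling choice are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mil_clip_probs(embeddings, W1, b1, W2, b2):
    """Score each frame embedding (instance) with a small DNN, then
    max-pool the per-instance probabilities into clip-level (bag)
    probabilities, so only the weak clip label is needed for training.

    embeddings: (n_frames, d) array of per-frame audio embeddings.
    Returns: (n_classes,) clip-level event probabilities.
    """
    h = np.maximum(embeddings @ W1 + b1, 0.0)   # ReLU hidden layer
    frame_probs = sigmoid(h @ W2 + b2)          # per-instance class probabilities
    return frame_probs.max(axis=0)              # MIL max pooling over instances

# Toy example (assumed sizes): 10 frames of 128-d embeddings, 5 event classes.
rng = np.random.default_rng(0)
d, hidden, n_classes = 128, 64, 5
W1 = rng.normal(scale=0.1, size=(d, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.1, size=(hidden, n_classes)); b2 = np.zeros(n_classes)
clip = rng.normal(size=(10, d))
probs = mil_clip_probs(clip, W1, b1, W2, b2)
print(probs.shape)  # (5,)
```

Because the pooling operator collapses the time axis, the loss can be computed against clip-level tags alone; other pooling choices (mean, attention-weighted) plug into the same structure.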


DOI: 10.21437/Interspeech.2018-1120

Cite as: Tseng, S., Li, J., Wang, Y., Metze, F., Szurley, J., Das, S. (2018) Multiple Instance Deep Learning for Weakly Supervised Small-Footprint Audio Event Detection. Proc. Interspeech 2018, 3279-3283, DOI: 10.21437/Interspeech.2018-1120.


@inproceedings{Tseng2018,
  author={Shao-Yen Tseng and Juncheng Li and Yun Wang and Florian Metze and Joseph Szurley and Samarjit Das},
  title={Multiple Instance Deep Learning for Weakly Supervised Small-Footprint Audio Event Detection},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={3279--3283},
  doi={10.21437/Interspeech.2018-1120},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1120}
}