Neural Network Distillation on IoT Platforms for Sound Event Detection

Gianmarco Cerutti, Rahul Prasad, Alessio Brutti, Elisabetta Farella


In most classification tasks, wide and deep neural networks perform and generalize better than their smaller counterparts, in particular when they are exposed to large and heterogeneous training sets. However, in the emerging field of the Internet of Things (IoT), memory footprint and energy budget pose severe limits on the size and complexity of the neural models that can be implemented on embedded devices. The Student-Teacher approach is an attractive strategy for distilling knowledge from a large network into smaller ones that fit on low-energy, low-complexity embedded IoT platforms. In this paper, we consider the outdoor sound event detection task as a use case. Building upon the VGGish network, we investigate different distillation strategies to substantially reduce the classifier's size and computational cost with minimal performance losses. Experiments on the UrbanSound8K dataset show that extreme compression factors (up to 4.2·10^-4 for parameters and 1.2·10^-3 for operations with respect to VGGish) can be achieved, limiting the accuracy degradation from 75% to 70%. Finally, we compare different embedded platforms to analyze the trade-off between available resources and achievable accuracy.
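The Student-Teacher distillation mentioned in the abstract is commonly implemented as a weighted combination of a hard-label cross-entropy term and a cross-entropy term against the teacher's temperature-softened outputs (Hinton-style distillation). The sketch below is illustrative only: the function names and the hyperparameters `T` and `alpha` are assumptions for exposition, not values from the paper.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T yields a softer distribution."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, true_label,
                      T=4.0, alpha=0.5):
    """Hinton-style knowledge-distillation loss (illustrative sketch).

    Combines cross-entropy with the ground-truth hard label and
    cross-entropy with the teacher's softened predictions. The soft
    term is scaled by T^2 so its gradient magnitude stays comparable
    as the temperature changes.
    """
    soft_teacher = softmax(teacher_logits, T)
    soft_student = softmax(student_logits, T)
    hard_student = softmax(student_logits, 1.0)
    ce_hard = -math.log(hard_student[true_label])
    ce_soft = -sum(p * math.log(q)
                   for p, q in zip(soft_teacher, soft_student))
    return alpha * ce_hard + (1.0 - alpha) * (T * T) * ce_soft
```

In this setup the small student network (e.g. one sized for an IoT microcontroller) is trained with `distillation_loss` using the logits of a large pre-trained teacher such as VGGish, so the student learns from the teacher's full output distribution rather than only the hard labels.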


DOI: 10.21437/Interspeech.2019-2394

Cite as: Cerutti, G., Prasad, R., Brutti, A., Farella, E. (2019) Neural Network Distillation on IoT Platforms for Sound Event Detection. Proc. Interspeech 2019, 3609-3613, DOI: 10.21437/Interspeech.2019-2394.


@inproceedings{Cerutti2019,
  author={Gianmarco Cerutti and Rahul Prasad and Alessio Brutti and Elisabetta Farella},
  title={{Neural Network Distillation on IoT Platforms for Sound Event Detection}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={3609--3613},
  doi={10.21437/Interspeech.2019-2394},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2394}
}