Deep Convolutional Neural Networks and Data Augmentation for Acoustic Event Recognition

Naoya Takahashi, Michael Gygli, Beat Pfister, Luc Van Gool


We propose a novel method for Acoustic Event Recognition (AER). In contrast to speech, sounds coming from acoustic events may be produced by a wide variety of sources. Furthermore, distinguishing them often requires analyzing an extended time period due to the lack of a clear sub-word unit. In order to incorporate this long-time frequency structure for AER, we introduce a convolutional neural network (CNN) with a large input field. In contrast to previous works, this enables training audio event recognition end-to-end. Our architecture is inspired by the success of VGGNet [1] and uses small 3×3 convolutions, but with greater depth than previous CNNs used for AER. To prevent over-fitting and to take full advantage of the modeling capability of our network, we further propose a novel data augmentation method to introduce data variation. Experimental results show that our CNN significantly outperforms state-of-the-art methods, including Bag of Audio Words (BoAW) and classical CNNs, achieving a 16% absolute improvement.
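The abstract mentions augmenting the training data to prevent over-fitting. A common way to create such variation for audio is to blend pairs of clips from the same event class at a random gain ratio; the sketch below illustrates this generic idea in pure Python (the function name `mix_clips` and the uniform gain distribution are illustrative assumptions, not the paper's exact scheme).

```python
import random

def mix_clips(clip_a, clip_b, seed=None):
    """Blend two same-class audio clips with a random gain.

    Illustrative sketch of mixing-based augmentation; the exact
    augmentation used in the paper may differ.
    """
    rng = random.Random(seed)
    alpha = rng.uniform(0.0, 1.0)          # random mixing weight in [0, 1]
    n = min(len(clip_a), len(clip_b))      # truncate to the shorter clip
    return [alpha * clip_a[i] + (1.0 - alpha) * clip_b[i] for i in range(n)]
```

Applied to many clip pairs with fresh random weights, this yields new training examples that preserve the class label while varying the signal.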


DOI: 10.21437/Interspeech.2016-805

Cite as

Takahashi, N., Gygli, M., Pfister, B., Gool, L.V. (2016) Deep Convolutional Neural Networks and Data Augmentation for Acoustic Event Recognition. Proc. Interspeech 2016, 2982-2986.

Bibtex
@inproceedings{Takahashi+2016,
  author={Naoya Takahashi and Michael Gygli and Beat Pfister and Luc Van Gool},
  title={Deep Convolutional Neural Networks and Data Augmentation for Acoustic Event Recognition},
  year={2016},
  booktitle={Interspeech 2016},
  doi={10.21437/Interspeech.2016-805},
  url={http://dx.doi.org/10.21437/Interspeech.2016-805},
  pages={2982--2986}
}