Robust Audio Event Recognition with 1-Max Pooling Convolutional Neural Networks

Huy Phan, Lars Hertel, Marco Maass, Alfred Mertins


We present in this paper a simple yet efficient convolutional neural network (CNN) architecture for robust audio event recognition. In contrast to deep CNN architectures with multiple convolutional and pooling layers followed by multiple fully connected layers, the proposed network consists of only three layers: a convolutional layer, a pooling layer, and a softmax layer. Two further features distinguish it from the deep architectures that have been proposed for the task: varying-size convolutional filters at the convolutional layer and a 1-max pooling scheme at the pooling layer. Intuitively, the network tends to select the most discriminative features from the whole audio signal for recognition. Our proposed CNN not only shows state-of-the-art performance on the standard robust audio event recognition task but also outperforms other deep architectures by up to 4.5% in recognition accuracy, which is equivalent to a 76.3% relative error reduction.
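The three-layer pipeline described in the abstract (varying-size convolutional filters, 1-max pooling over time, softmax) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the input length, filter widths, number of classes, and random weights are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_valid(x, w):
    """Valid 1-D cross-correlation of signal x with filter w."""
    n = len(x) - len(w) + 1
    return np.array([np.dot(x[i:i + len(w)], w) for i in range(n)])

def one_max_features(x, filters):
    """Convolve x with each filter, apply ReLU, then keep only the single
    maximum activation per filter map (the 1-max pooling scheme)."""
    feats = []
    for w in filters:
        act = np.maximum(conv1d_valid(x, w), 0.0)  # ReLU activations over time
        feats.append(act.max())                    # 1-max pooling: one value per filter
    return np.array(feats)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical input: a 1-D frame-level feature sequence of length 100,
# and six convolutional filters of varying widths (3, 5, 7 samples).
x = rng.standard_normal(100)
filters = [rng.standard_normal(k) for k in (3, 5, 7, 3, 5, 7)]

pooled = one_max_features(x, filters)        # fixed-size vector, one entry per filter
W = rng.standard_normal((4, len(filters)))   # softmax layer weights, 4 classes (assumed)
probs = softmax(W @ pooled)                  # class posterior probabilities
```

Because 1-max pooling reduces each filter's activation map to a single value, the pooled feature vector has a fixed size regardless of the input signal length, which is what lets the network consume whole audio signals of varying duration.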


DOI: 10.21437/Interspeech.2016-123

Cite as

Phan, H., Hertel, L., Maass, M., Mertins, A. (2016) Robust Audio Event Recognition with 1-Max Pooling Convolutional Neural Networks. Proc. Interspeech 2016, 3653-3657.

Bibtex
@inproceedings{Phan+2016,
author={Huy Phan and Lars Hertel and Marco Maass and Alfred Mertins},
title={Robust Audio Event Recognition with 1-Max Pooling Convolutional Neural Networks},
year=2016,
booktitle={Interspeech 2016},
doi={10.21437/Interspeech.2016-123},
url={http://dx.doi.org/10.21437/Interspeech.2016-123},
pages={3653--3657}
}