ISCA Archive Interspeech 2017

Audio Scene Classification with Deep Recurrent Neural Networks

Huy Phan, Philipp Koch, Fabrice Katzberg, Marco Maass, Radoslaw Mazur, Alfred Mertins

In this work, we introduce an efficient approach for audio scene classification using deep recurrent neural networks. An audio scene is first transformed into a sequence of high-level label-tree-embedding feature vectors. The vector sequence is then divided into multiple subsequences, on which a deep GRU-based recurrent neural network is trained for sequence-to-label classification. The global predicted label for the entire sequence is finally obtained by aggregating the subsequence classification outputs. We show that our approach obtains an F1-score of 97.7% on the LITIS Rouen dataset, the largest publicly available dataset for the task. Compared to the best previously reported result on the dataset, our approach reduces the relative classification error by 35.3%.
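As a rough illustration of the pipeline the abstract describes, the following is a minimal PyTorch sketch: a stacked GRU classifies fixed-length subsequences of precomputed feature vectors, and subsequence posteriors are averaged into a scene-level decision. The label-tree-embedding front end is not reproduced; the class name, dimensions, subsequence length, and mean-pooling aggregation are illustrative assumptions, not the authors' exact design.

import torch
import torch.nn as nn

class GRUSceneClassifier(nn.Module):
    """Hypothetical deep GRU for sequence-to-label classification of one
    subsequence of label-tree-embedding feature vectors."""
    def __init__(self, feature_dim, hidden_dim, num_classes, num_layers=2):
        super().__init__()
        # Stacked ("deep") GRU over the feature-vector subsequence.
        self.gru = nn.GRU(feature_dim, hidden_dim, num_layers=num_layers,
                          batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):
        # x: (batch, subseq_len, feature_dim)
        _, h_n = self.gru(x)      # h_n: (num_layers, batch, hidden_dim)
        return self.fc(h_n[-1])   # logits from the top layer's final state

def classify_scene(model, sequence, subseq_len):
    """Split a whole-scene sequence (T, feature_dim) into non-overlapping
    subsequences, classify each, and aggregate into one scene label."""
    subseqs = sequence.unfold(0, subseq_len, subseq_len)  # (N, feature_dim, subseq_len)
    subseqs = subseqs.transpose(1, 2)                     # (N, subseq_len, feature_dim)
    with torch.no_grad():
        logits = model(subseqs)                           # (N, num_classes)
    # Average subsequence posteriors (one plausible aggregation choice).
    probs = logits.softmax(dim=-1).mean(dim=0)
    return probs.argmax().item()

# Usage with random stand-in features (dimensions are assumptions):
model = GRUSceneClassifier(feature_dim=64, hidden_dim=128, num_classes=19)
scene = torch.randn(500, 64)  # 500 feature vectors for one audio scene
label = classify_scene(model, scene, subseq_len=50)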


doi: 10.21437/Interspeech.2017-101

Cite as: Phan, H., Koch, P., Katzberg, F., Maass, M., Mazur, R., Mertins, A. (2017) Audio Scene Classification with Deep Recurrent Neural Networks. Proc. Interspeech 2017, 3043-3047, doi: 10.21437/Interspeech.2017-101

@inproceedings{phan17_interspeech,
  author={Huy Phan and Philipp Koch and Fabrice Katzberg and Marco Maass and Radoslaw Mazur and Alfred Mertins},
  title={{Audio Scene Classification with Deep Recurrent Neural Networks}},
  year=2017,
  booktitle={Proc. Interspeech 2017},
  pages={3043--3047},
  doi={10.21437/Interspeech.2017-101}
}