Attention Based CLDNNs for Short-Duration Acoustic Scene Classification

Jinxi Guo, Ning Xu, Li-Jia Li, Abeer Alwan


Recently, neural networks with deep architectures have been widely applied to acoustic scene classification. Both Convolutional Neural Networks (CNNs) and Long Short-Term Memory networks (LSTMs) have shown improvements over fully connected Deep Neural Networks (DNNs). Motivated by the fact that CNNs, LSTMs, and DNNs are complementary in their modeling capabilities, we apply the CLDNN (Convolutional, Long Short-Term Memory, Deep Neural Network) framework to short-duration acoustic scene classification in a unified architecture. CLDNNs take advantage of frequency modeling with CNNs, temporal modeling with LSTMs, and discriminative training with DNNs. Building on the CLDNN architecture, several novel attention-based mechanisms are proposed and applied to the LSTM layer to predict the importance of each time step. We evaluate the proposed method on a truncated version of the 2016 TUT acoustic scenes dataset, which consists of recordings from 15 different scenes. Using CLDNNs with bidirectional LSTMs, we achieve higher performance than conventional neural network architectures. Moreover, combining the attention-weighted output with the output of the LSTM's final time step yields further significant improvements.
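The abstract describes attention weights that score the importance of each LSTM time step, with the attention-weighted summary combined with the final-time-step output. A minimal NumPy sketch of that general idea follows; the dimensions, the additive scoring function (`tanh` projection followed by a learned vector), and combination by concatenation are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: T time steps of H-dimensional LSTM outputs.
T, H = 20, 32
lstm_outputs = rng.standard_normal((T, H))  # stand-in for LSTM hidden states h_1..h_T

# Learnable attention parameters (randomly initialized here for illustration).
W = 0.1 * rng.standard_normal((H, H))
v = 0.1 * rng.standard_normal(H)

# Score each time step, then softmax-normalize to get its predicted importance.
scores = np.tanh(lstm_outputs @ W) @ v   # shape (T,)
weights = np.exp(scores - scores.max())
weights /= weights.sum()                 # attention weights, sum to 1

# Attention-weighted summary over all time steps.
context = weights @ lstm_outputs         # shape (H,)

# Combine with the final-time-step output (here by concatenation)
# before the DNN classification layers.
combined = np.concatenate([context, lstm_outputs[-1]])  # shape (2H,)
```

In a trained model, `W` and `v` would be learned jointly with the rest of the network, and the softmax ensures the per-time-step weights form a valid distribution.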


 DOI: 10.21437/Interspeech.2017-440

Cite as: Guo, J., Xu, N., Li, L.-J., Alwan, A. (2017) Attention Based CLDNNs for Short-Duration Acoustic Scene Classification. Proc. Interspeech 2017, 469-473, DOI: 10.21437/Interspeech.2017-440.


@inproceedings{Guo2017,
  author={Jinxi Guo and Ning Xu and Li-Jia Li and Abeer Alwan},
  title={Attention Based CLDNNs for Short-Duration Acoustic Scene Classification},
  year={2017},
  booktitle={Proc. Interspeech 2017},
  pages={469--473},
  doi={10.21437/Interspeech.2017-440},
  url={http://dx.doi.org/10.21437/Interspeech.2017-440}
}