Sound Event Detection in Multichannel Audio Using Convolutional Time-Frequency-Channel Squeeze and Excitation

Wei Xia, Kazuhito Koishida


In this study, we introduce a convolutional time-frequency-channel “Squeeze and Excitation” (tfc-SE) module to explicitly model inter-dependencies between the time-frequency domain and multiple channels. The tfc-SE module consists of two parts: a tf-SE block and a c-SE block, which provide attention over the time-frequency and channel domains, respectively, to adaptively recalibrate the input feature map. The proposed tfc-SE module, combined with a popular Convolutional Recurrent Neural Network (CRNN) model, is evaluated on a multi-channel sound event detection task with overlapping audio sources: the training and test data are synthesized TUT Sound Events 2018 datasets, recorded with microphone arrays. We show that the tfc-SE module can be incorporated into the CRNN model at a small additional computational cost and brings significant improvements in sound event detection accuracy. We also perform detailed ablation studies analyzing various factors that may influence the performance of the SE blocks. With the best tfc-SE block, the error rate (ER) decreases from 0.2538 to 0.2026, a relative ER reduction of 20.17%, together with a 5.72% improvement in F1 score. The results indicate that the acoustic embeddings learned with the tfc-SE module efficiently strengthen time-frequency and channel-wise feature representations and thereby improve discriminative performance.
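The recalibration idea behind the tf-SE and c-SE blocks can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: random weight matrices stand in for the learned parameters, the c-SE path uses the standard squeeze (global average pooling) and two-layer bottleneck excitation, the tf-SE path uses a 1x1 channel projection to gate each time-frequency bin, and the two paths are fused here by simple summation; the exact gating and fusion in the paper may differ.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def c_se(x, w1, w2):
    """Channel SE: x has shape (C, T, F). Squeeze time and frequency
    by global average pooling, excite with a two-layer bottleneck
    (ReLU then sigmoid), and rescale each channel."""
    z = x.mean(axis=(1, 2))                   # squeeze -> (C,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0))   # excitation -> (C,)
    return x * s[:, None, None]               # channel-wise recalibration

def tf_se(x, w):
    """Time-frequency SE: squeeze the channel axis with a 1x1
    projection (weights w of shape (C,)), then gate every (t, f) bin."""
    q = sigmoid(np.tensordot(w, x, axes=([0], [0])))  # gate -> (T, F)
    return x * q[None, :, :]                  # bin-wise recalibration

def tfc_se(x, w1, w2, w):
    """Combine both attention paths (summation assumed here)."""
    return c_se(x, w1, w2) + tf_se(x, w)

# Toy usage with random weights; r is the bottleneck reduction ratio.
rng = np.random.default_rng(0)
C, T, F, r = 8, 4, 5, 2
x = rng.standard_normal((C, T, F))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
w = rng.standard_normal(C)
y = tfc_se(x, w1, w2, w)   # same shape as x: (8, 4, 5)
```

Both paths only rescale the input feature map, so the module preserves the tensor shape and can be dropped after any convolutional layer of the CRNN at little extra cost.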


 DOI: 10.21437/Interspeech.2019-1860

Cite as: Xia, W., Koishida, K. (2019) Sound Event Detection in Multichannel Audio Using Convolutional Time-Frequency-Channel Squeeze and Excitation. Proc. Interspeech 2019, 3629-3633, DOI: 10.21437/Interspeech.2019-1860.


@inproceedings{Xia2019,
  author={Wei Xia and Kazuhito Koishida},
  title={{Sound Event Detection in Multichannel Audio Using Convolutional Time-Frequency-Channel Squeeze and Excitation}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={3629--3633},
  doi={10.21437/Interspeech.2019-1860},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1860}
}