Unsupervised Temporal Feature Learning Based on Sparse Coding Embedded BoAW for Acoustic Event Recognition

Liwen Zhang, Jiqing Han, Shiwen Deng


The performance of an Acoustic Event Recognition (AER) system depends strongly on both the statistical information and the temporal dynamics in the audio signal. Although the traditional Bag of Audio Words (BoAW) and Gaussian Mixture Model (GMM) approaches capture richer statistical information than frame-level feature learning methods by aggregating multiple frame-level descriptors of an audio segment, they discard the temporal information. Recently, many Deep Neural Network (DNN) based AER methods have been proposed to effectively capture the temporal information in audio signals and have achieved better performance; however, these methods usually require manually annotated labels and fixed-length inputs during feature learning. In this paper, we propose a novel unsupervised temporal feature learning method that effectively captures the temporal dynamics of an entire audio signal of arbitrary duration by building direct connections between the sequence of BoAW histograms and its time indexes using a non-linear Support Vector Regression (SVR) model. Furthermore, to give the feature representation better signal reconstruction ability, we embed the sparse coding approach in the conventional BoAW framework. Experimental results show that our method improves on the BoAW and Convolutional Neural Network (CNN) baselines by 9.7% and 4.1%, respectively.
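The pipeline the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the codebook size, sparsity penalty, and pooling window are arbitrary toy values, and a linear SVR is used here so that the learned weight vector directly yields a fixed-length clip feature, whereas the paper uses a non-linear SVR.

```python
import numpy as np
from sklearn.decomposition import sparse_encode
from sklearn.svm import LinearSVR

rng = np.random.default_rng(0)

# Toy frame-level descriptors for one audio clip (n_frames x n_dims).
frames = rng.normal(size=(200, 20))

# A codebook ("dictionary") that would normally be learned from training
# frames, e.g. with sklearn.decomposition.DictionaryLearning (toy here).
codebook = rng.normal(size=(64, 20))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)

# Sparse-coding step embedded in BoAW: encode each frame over the codebook
# via a lasso solver instead of hard vector quantization.
codes = sparse_encode(frames, codebook, algorithm="lasso_lars", alpha=0.1)

# Pool absolute codes over short windows -> a sequence of BoAW histograms.
win = 20
histograms = np.array([np.abs(codes[i:i + win]).sum(axis=0)
                       for i in range(0, len(codes) - win + 1, win)])
histograms /= histograms.sum(axis=1, keepdims=True) + 1e-8

# Unsupervised temporal feature learning: regress each histogram onto its
# time index; the fitted regression weights summarize the clip's temporal
# dynamics and have a fixed length regardless of the clip's duration.
t = np.arange(len(histograms), dtype=float)
svr = LinearSVR(C=1.0, max_iter=10000).fit(histograms, t)
clip_feature = svr.coef_
print(clip_feature.shape)  # one fixed-length vector per clip: (64,)
```

Because the SVR is fit per clip against time indexes only, no class labels are needed, and clips of different lengths all map to feature vectors of the same dimensionality, matching the "unsupervised, arbitrary duration" claims above.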


DOI: 10.21437/Interspeech.2018-1243

Cite as: Zhang, L., Han, J., Deng, S. (2018) Unsupervised Temporal Feature Learning Based on Sparse Coding Embedded BoAW for Acoustic Event Recognition. Proc. Interspeech 2018, 3284-3288, DOI: 10.21437/Interspeech.2018-1243.


@inproceedings{Zhang2018,
  author={Liwen Zhang and Jiqing Han and Shiwen Deng},
  title={Unsupervised Temporal Feature Learning Based on Sparse Coding Embedded BoAW for Acoustic Event Recognition},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={3284--3288},
  doi={10.21437/Interspeech.2018-1243},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1243}
}