Acoustic Scene Classification Using Teacher-Student Learning with Soft-Labels

Hee-Soo Heo, Jee-weon Jung, Hye-jin Shim, Ha-Jin Yu


Acoustic scene classification assigns an input segment to one of several pre-defined classes using spectral information. The spectral characteristics of acoustic scenes may not be mutually exclusive, because different classes can share common acoustic properties, such as the babble noise present in both airports and shopping malls. However, the conventional training procedure based on one-hot labels does not consider these similarities between acoustic scenes. We exploit teacher-student learning to derive soft-labels that capture common acoustic properties shared among different acoustic scenes. In teacher-student learning, the teacher network produces soft-labels, on which the student network is then trained. We investigate various methods for extracting soft-labels that better represent similarities across scenes, including extracting soft-labels from multiple audio segments labeled as the same acoustic scene. Experimental results demonstrate the potential of our approach, showing a classification accuracy of 77.36% on the DCASE 2018 task 1 validation set.
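The soft-label idea described above can be sketched in a few lines: a teacher network's logits are softened with a temperature-scaled softmax, the resulting distribution serves as the training target for the student (via a KL-divergence loss), and soft-labels from multiple segments of the same scene can be averaged. The following is a minimal NumPy illustration, not the authors' implementation; the temperature value, logits, and class names are illustrative.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature yields a
    # softer distribution that exposes inter-class similarities.
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL divergence between the teacher's soft-labels and the
    # student's (equally softened) predictions.
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    return np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)), axis=-1)

# Hypothetical teacher logits for two segments of the same scene,
# over three classes (e.g. airport, shopping mall, park).
teacher_logits = np.array([[4.0, 3.5, -1.0],
                           [3.8, 3.9, -0.5]])

# Per-segment soft-labels, and one soft-label averaged over all
# segments sharing the same scene label.
soft_labels = softmax(teacher_logits, temperature=2.0)
scene_soft_label = soft_labels.mean(axis=0)
```

Unlike a one-hot target, `scene_soft_label` keeps substantial probability mass on the acoustically similar class, which is exactly the similarity information the one-hot training procedure discards.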


DOI: 10.21437/Interspeech.2019-1989

Cite as: Heo, H., Jung, J., Shim, H., Yu, H. (2019) Acoustic Scene Classification Using Teacher-Student Learning with Soft-Labels. Proc. Interspeech 2019, 614-618, DOI: 10.21437/Interspeech.2019-1989.


@inproceedings{Heo2019,
  author={Hee-Soo Heo and Jee-weon Jung and Hye-jin Shim and Ha-Jin Yu},
  title={{Acoustic Scene Classification Using Teacher-Student Learning with Soft-Labels}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={614--618},
  doi={10.21437/Interspeech.2019-1989},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1989}
}