An End-to-End Audio Classification System Based on Raw Waveforms and Mix-Training Strategy

Jiaxu Chen, Jing Hao, Kai Chen, Di Xie, Shicai Yang, Shiliang Pu


Audio classification distinguishes different kinds of sounds, which is helpful for intelligent applications in daily life. However, it remains challenging because an audio clip may contain multiple, possibly overlapping, sound events. This paper introduces an end-to-end audio classification system based on raw waveforms and a mix-training strategy. Compared with the hand-crafted features widely used in prior work, raw waveforms preserve more complete information and are better suited to multi-label classification. Taking raw waveforms as input, our network combines two ResNet variants to learn a discriminative representation. To exploit the information in intermediate layers, the model applies multi-level prediction with an attention structure. Furthermore, we design a mix-training strategy to overcome the performance limitation imposed by the amount of training data. Experiments show that the proposed system achieves a mean average precision of 37.2% on the Audio Set dataset. Without using extra training data, our system outperforms the state-of-the-art multi-level attention model.
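The abstract describes the mix-training strategy only at a high level. As a rough illustration, the following PyTorch sketch assumes a mixup-style scheme in which two raw waveforms are blended and their multi-hot labels are merged by union, which suits the multi-label setting of Audio Set. The function name `mix_training_batch`, the Beta-distributed mixing weight, and the label-union rule are illustrative assumptions, not the authors' exact recipe.

import torch

def mix_training_batch(waveforms, labels, alpha=1.0):
    """Mixup-style augmentation on raw waveforms for multi-label tagging.

    waveforms: (batch, samples) float tensor of raw audio
    labels:    (batch, classes) multi-hot tensor of event labels
    Returns a blended batch and merged labels; one hypothetical
    reading of a mix-training step, not the paper's exact method.
    """
    # Sample a mixing coefficient from a Beta distribution (as in mixup).
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    # Pair each clip with a randomly permuted partner from the same batch.
    perm = torch.randperm(waveforms.size(0))
    mixed = lam * waveforms + (1.0 - lam) * waveforms[perm]
    # Multi-label targets: take the union of both clips' event labels,
    # since overlapping events can all be present in the mixed clip.
    mixed_labels = torch.clamp(labels + labels[perm], max=1.0)
    return mixed, mixed_labels

In a training loop, the mixed batch would be fed to the network and scored with a multi-label loss such as binary cross-entropy, e.g. `loss = F.binary_cross_entropy_with_logits(model(mixed), mixed_labels)`; the paper additionally suggests a staged regime in which training on mixed data is followed by training on the original data.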


 DOI: 10.21437/Interspeech.2019-1579

Cite as: Chen, J., Hao, J., Chen, K., Xie, D., Yang, S., Pu, S. (2019) An End-to-End Audio Classification System Based on Raw Waveforms and Mix-Training Strategy. Proc. Interspeech 2019, 3644-3648, DOI: 10.21437/Interspeech.2019-1579.


@inproceedings{Chen2019,
  author={Jiaxu Chen and Jing Hao and Kai Chen and Di Xie and Shicai Yang and Shiliang Pu},
  title={{An End-to-End Audio Classification System Based on Raw Waveforms and Mix-Training Strategy}},
  year={2019},
  booktitle={Proc. Interspeech 2019},
  pages={3644--3648},
  doi={10.21437/Interspeech.2019-1579},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1579}
}