Audio Classification of Bit-Representation Waveform

Masaki Okawa, Takuya Saito, Naoki Sawada, Hiromitsu Nishizaki


This study investigated waveform representations for audio signal classification. Recently, many studies on audio waveform classification, such as acoustic event detection and music genre classification, have been published. Most of these studies have adopted a deep learning (neural network) framework. Generally, a frequency analysis method such as the Fourier transform is applied to extract frequency or spectral information from an audio waveform before it is fed into the neural network, rather than inputting the raw waveform directly. In contrast to these previous studies, in this paper, we propose a novel waveform representation method for audio classification in which audio waveforms are represented as bit sequences. In our experiments, we compared the proposed bit-representation waveform, which is given directly to a neural network, with other audio waveform representations, such as the raw audio waveform and the power spectrum, on two classification tasks: an acoustic event classification task and a sound/music classification task. The experimental results showed that the bit-representation waveform achieved the best classification performance on both tasks.
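The abstract does not specify the exact encoding used to turn samples into bit sequences, so the following is only an illustrative sketch: it assumes 16-bit signed PCM samples and unpacks each sample, MSB first, into a vector of 0/1 features that could be fed to a neural network in place of the raw amplitude values. The function name and bit ordering are assumptions, not the authors' implementation.

```python
import numpy as np

def waveform_to_bits(samples: np.ndarray, bit_depth: int = 16) -> np.ndarray:
    """Unpack quantized audio samples into binary digits (a sketch).

    `samples` is assumed to be a 1-D array of signed integers
    (e.g. 16-bit PCM). Each sample becomes one row of 0/1 values,
    most significant bit first, so a waveform of N samples yields
    an (N, bit_depth) feature matrix.
    """
    # Mask the signed samples down to `bit_depth` bits so the sign
    # bit is treated as an ordinary bit position (two's complement).
    unsigned = samples.astype(np.int64) & ((1 << bit_depth) - 1)
    # Shift each bit position down to the LSB and mask it out.
    shifts = np.arange(bit_depth - 1, -1, -1)
    bits = (unsigned[:, None] >> shifts) & 1
    return bits.astype(np.float32)

# Toy 16-bit PCM waveform covering the extremes of the sample range.
pcm = np.array([0, 1, -1, 32767, -32768], dtype=np.int16)
bits = waveform_to_bits(pcm)
print(bits.shape)  # (5, 16)
```

Under this encoding, a sample value of -1 (all ones in two's complement) maps to a row of sixteen 1s, while 0 maps to a row of sixteen 0s.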


DOI: 10.21437/Interspeech.2019-1855

Cite as: Okawa, M., Saito, T., Sawada, N., Nishizaki, H. (2019) Audio Classification of Bit-Representation Waveform. Proc. Interspeech 2019, 2553-2557, DOI: 10.21437/Interspeech.2019-1855.


@inproceedings{Okawa2019,
  author={Masaki Okawa and Takuya Saito and Naoki Sawada and Hiromitsu Nishizaki},
  title={{Audio Classification of Bit-Representation Waveform}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={2553--2557},
  doi={10.21437/Interspeech.2019-1855},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1855}
}