Sub-Band Convolutional Neural Networks for Small-Footprint Spoken Term Classification

Chieh-Chi Kao, Ming Sun, Yixin Gao, Shiv Vitaladevuni, Chao Wang


This paper proposes a Sub-band Convolutional Neural Network for spoken term classification. Convolutional neural networks (CNNs) have proven to be very effective in acoustic applications such as spoken term classification, keyword spotting, speaker identification, and acoustic event detection. Unlike applications in computer vision, the spatial invariance property of 2D convolutional kernels does not fit acoustic applications well, since the meaning of a specific 2D kernel varies considerably along the feature axis of an input feature map. We propose a sub-band CNN architecture that applies different convolutional kernels on each feature sub-band, which makes the overall computation more efficient. Experimental results show that the computational efficiency brought by the sub-band CNN is more beneficial for small-footprint models. Compared to a baseline full-band CNN for spoken term classification on the publicly available Speech Commands dataset, the proposed sub-band CNN architecture reduces computation by 39.7% on commands classification and 49.3% on digits classification while maintaining accuracy.
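The abstract does not spell out the exact network configuration, but the core idea — splitting the feature (frequency) axis into sub-bands and convolving each with its own kernel instead of sharing one kernel across the full band — can be sketched in a few lines. The function names and the equal-width band split below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive 'valid'-mode 2D cross-correlation of a single-channel map x with kernel k."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def subband_conv(feat, kernels):
    """Split the feature (frequency) axis into len(kernels) sub-bands and apply
    a distinct kernel to each band, as opposed to one kernel shared full-band.
    Equal-width bands are an assumption made for this sketch."""
    bands = np.array_split(feat, len(kernels), axis=0)
    return [conv2d_valid(band, k) for band, k in zip(bands, kernels)]

# Toy example: a 40-bin x 100-frame log-mel-like map, 2 sub-bands, 3x3 kernels.
rng = np.random.default_rng(0)
feat = rng.standard_normal((40, 100))
kernels = [rng.standard_normal((3, 3)) for _ in range(2)]
maps = subband_conv(feat, kernels)
print([m.shape for m in maps])  # each 20-bin sub-band yields an 18x98 map
```

Because each kernel only sees its own band, per-band kernels can stay small while still modeling band-specific patterns, which is where the computational savings for small-footprint models come from.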


DOI: 10.21437/Interspeech.2019-1766

Cite as: Kao, C., Sun, M., Gao, Y., Vitaladevuni, S., Wang, C. (2019) Sub-Band Convolutional Neural Networks for Small-Footprint Spoken Term Classification. Proc. Interspeech 2019, 2195-2199, DOI: 10.21437/Interspeech.2019-1766.


@inproceedings{Kao2019,
  author={Chieh-Chi Kao and Ming Sun and Yixin Gao and Shiv Vitaladevuni and Chao Wang},
  title={{Sub-Band Convolutional Neural Networks for Small-Footprint Spoken Term Classification}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={2195--2199},
  doi={10.21437/Interspeech.2019-1766},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1766}
}