End-to-End Speech Command Recognition with Capsule Network

Jaesung Bae, Dae-Shik Kim


In recent years, neural networks have become one of the common approaches used in speech recognition (SR), with SR systems based on Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) achieving state-of-the-art results on various SR benchmarks. In particular, since CNNs capture local features effectively, they are applied to tasks with relatively short-term dependencies, such as keyword spotting or phoneme-level sequence recognition. However, one limitation of CNNs is that, with max-pooling, they do not consider the pose relationship between low-level features. Motivated by this problem, we apply the capsule network to capture the spatial relationship and pose information of speech spectrogram features along both the frequency and time axes. We show that our proposed end-to-end SR system with capsule networks, evaluated on a one-second speech commands dataset, achieves better results on both clean and noise-added test sets than baseline CNN models.
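The routing mechanism the abstract alludes to can be illustrated with a minimal sketch of dynamic routing-by-agreement (Sabour et al.'s capsule routing), which replaces max-pooling with iteratively updated coupling coefficients. This is a generic NumPy illustration, not the paper's exact architecture; the capsule counts, dimensions, and iteration count below are arbitrary assumptions for demonstration.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Squash non-linearity: keeps the vector's direction,
    # maps its norm into [0, 1) so length can encode probability.
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, n_iters=3):
    # u_hat: prediction vectors from input capsules to output capsules,
    #        shape (n_in, n_out, dim_out).
    n_in, n_out, _ = u_hat.shape
    b = np.zeros((n_in, n_out))  # routing logits, start uniform
    for _ in range(n_iters):
        # Coupling coefficients: softmax over output capsules per input capsule.
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
        # Weighted sum of predictions -> candidate outputs, shape (n_out, dim_out).
        s = (c[..., None] * u_hat).sum(axis=0)
        v = squash(s)  # output capsule vectors
        # Increase logits where predictions agree with the output (dot product).
        b = b + np.einsum('iod,od->io', u_hat, v)
    return v

# Toy example: 6 input capsules routing to 3 output capsules of dimension 4.
rng = np.random.default_rng(0)
u_hat = rng.normal(size=(6, 3, 4))
v = dynamic_routing(u_hat)
print(v.shape)  # (3, 4)
```

In the paper's setting, the input capsules would come from convolutional features of the spectrogram, so the agreement step can preserve frequency/time pose information that max-pooling discards.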


DOI: 10.21437/Interspeech.2018-1888

Cite as: Bae, J., Kim, D. (2018) End-to-End Speech Command Recognition with Capsule Network. Proc. Interspeech 2018, 776-780, DOI: 10.21437/Interspeech.2018-1888.


@inproceedings{Bae2018,
  author={Jaesung Bae and Dae-Shik Kim},
  title={End-to-End Speech Command Recognition with Capsule Network},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={776--780},
  doi={10.21437/Interspeech.2018-1888},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1888}
}