SPEAK YOUR MIND! Towards Imagined Speech Recognition with Hierarchical Deep Learning

Pramit Saha, Muhammad Abdul-Mageed, Sidney Fels


Speech-related Brain-Computer Interface (BCI) technologies provide effective vocal communication strategies for controlling devices through speech commands interpreted from brain signals. In order to infer imagined speech from active thoughts, we propose a novel hierarchical deep learning BCI system for subject-independent classification of 11 speech tokens, including phonemes and words. Our novel approach exploits predicted articulatory information from six phonological categories (e.g., nasal, bilabial) as an intermediate step for classifying the phonemes and words, thereby identifying the discriminative signals responsible for natural speech synthesis. The proposed network is composed of a hierarchical combination of spatial and temporal CNNs cascaded with a deep autoencoder. Our best models on the KARA database achieve an average accuracy of 83.42% across the six binary phonological classification tasks, and 53.36% on the individual token identification task, significantly outperforming our baselines. Ultimately, our work suggests the possible existence of a brain imagery footprint for the underlying articulatory movements related to different sounds, which can be used to aid imagined speech decoding.
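The core hierarchical idea (first predict the six binary phonological categories, then use those predictions as intermediate features for the 11-way token classification) can be sketched in plain NumPy. Everything below is illustrative: the feature dimension, the linear stage models, and the random weights are assumptions standing in for the paper's actual spatial/temporal CNN and deep autoencoder pipeline.

```python
import numpy as np

# Illustrative dimensions (NOT from the paper): a 64-dim EEG feature
# vector, 6 binary phonological categories, 11 speech tokens.
rng = np.random.default_rng(0)
D, N_CAT, N_TOK = 64, 6, 11

# Placeholder linear models; the paper uses CNNs + a deep autoencoder.
W_cat = rng.standard_normal((D, N_CAT))          # stage 1: category predictor
W_tok = rng.standard_normal((D + N_CAT, N_TOK))  # stage 2: token classifier

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def classify(eeg_features):
    """Two-stage hierarchical classification sketch."""
    # Stage 1: predict six phonological category probabilities.
    cat_probs = sigmoid(eeg_features @ W_cat)
    # Stage 2: augment the features with the intermediate predictions
    # and classify the individual speech token.
    augmented = np.concatenate([eeg_features, cat_probs])
    logits = augmented @ W_tok
    return cat_probs, int(np.argmax(logits))

cats, token = classify(rng.standard_normal(D))
```

The key design point is that stage 2 conditions on the articulatory (phonological) predictions rather than classifying the 11 tokens directly, which is what lets the intermediate articulatory information aid token decoding.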


DOI: 10.21437/Interspeech.2019-3041

Cite as: Saha, P., Abdul-Mageed, M., Fels, S. (2019) SPEAK YOUR MIND! Towards Imagined Speech Recognition with Hierarchical Deep Learning. Proc. Interspeech 2019, 141-145, DOI: 10.21437/Interspeech.2019-3041.


@inproceedings{Saha2019,
  author={Pramit Saha and Muhammad Abdul-Mageed and Sidney Fels},
  title={{SPEAK YOUR MIND! Towards Imagined Speech Recognition with Hierarchical Deep Learning}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={141--145},
  doi={10.21437/Interspeech.2019-3041},
  url={http://dx.doi.org/10.21437/Interspeech.2019-3041}
}