End-to-End Deep Learning Framework for Speech Paralinguistics Detection Based on Perception Aware Spectrum

Danwei Cai, Zhidong Ni, Wenbo Liu, Weicheng Cai, Gang Li, Ming Li


In this paper, we propose an end-to-end deep learning framework for speech paralinguistics detection that takes a perception-aware spectrum as input. Existing studies show that speech produced under a cold exhibits a distinct energy distribution in the low-frequency components compared with speech produced in a healthy condition. This motivates us to feed a perception-aware spectrum into an end-to-end learning framework trained on a small-scale dataset. In this work, we evaluate both the Constant Q Transform (CQT) spectrum and the Gammatone spectrum in different end-to-end deep learning networks; both spectra closely mimic human auditory perception and represent speech as 2D images. Experimental results on the Interspeech 2017 Computational Paralinguistics Cold sub-Challenge demonstrate the effectiveness of the proposed perception-aware spectrum combined with end-to-end deep learning. The final fusion result of our proposed method is 8% better than the provided baseline in terms of unweighted average recall (UAR).
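The Gammatone front end mentioned in the abstract can be illustrated with a minimal NumPy sketch: an ERB-spaced gammatone filterbank is applied to the waveform and frame-level log energies form a 2D "image" suitable for a convolutional network. The filter order, band count, frame sizes, and frequency range below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def erb(f):
    # Glasberg-Moore equivalent rectangular bandwidth (Hz) at centre frequency f
    return 24.7 * (4.37 * f / 1000.0 + 1.0)

def gammatone_ir(fc, sr, duration=0.025, order=4):
    # 4th-order gammatone impulse response: t^(n-1) * exp(-2*pi*b*t) * cos(2*pi*fc*t)
    t = np.arange(int(duration * sr)) / sr
    b = 1.019 * erb(fc)
    g = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return g / np.max(np.abs(g))

def gammatone_spectrogram(x, sr, n_bands=64, fmin=50.0, fmax=8000.0,
                          frame_len=400, hop=160):
    # Centre frequencies spaced uniformly on the ERB-rate scale
    erb_lo = 21.4 * np.log10(1 + 0.00437 * fmin)
    erb_hi = 21.4 * np.log10(1 + 0.00437 * fmax)
    fcs = (10 ** (np.linspace(erb_lo, erb_hi, n_bands) / 21.4) - 1) / 0.00437

    n_frames = 1 + (len(x) - frame_len) // hop
    spec = np.zeros((n_bands, n_frames))
    for i, fc in enumerate(fcs):
        y = np.convolve(x, gammatone_ir(fc, sr), mode="same")  # band-pass filter
        e = y ** 2                                             # instantaneous energy
        for j in range(n_frames):                              # frame-level averaging
            spec[i, j] = e[j * hop : j * hop + frame_len].mean()
    return np.log(spec + 1e-10)                                # log-compressed 2D map
```

For example, a one-second 16 kHz recording yields a 64-band log-energy map whose rows follow the ERB scale, so low-frequency detail (where the abstract notes cold-related energy variation) gets proportionally finer resolution than in a linear-frequency spectrogram.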


DOI: 10.21437/Interspeech.2017-1445

Cite as: Cai, D., Ni, Z., Liu, W., Cai, W., Li, G., Li, M. (2017) End-to-End Deep Learning Framework for Speech Paralinguistics Detection Based on Perception Aware Spectrum. Proc. Interspeech 2017, 3452-3456, DOI: 10.21437/Interspeech.2017-1445.


@inproceedings{Cai2017,
  author={Danwei Cai and Zhidong Ni and Wenbo Liu and Weicheng Cai and Gang Li and Ming Li},
  title={End-to-End Deep Learning Framework for Speech Paralinguistics Detection Based on Perception Aware Spectrum},
  year=2017,
  booktitle={Proc. Interspeech 2017},
  pages={3452--3456},
  doi={10.21437/Interspeech.2017-1445},
  url={http://dx.doi.org/10.21437/Interspeech.2017-1445}
}