Transfer Learning for Improving Speech Emotion Classification Accuracy

Siddique Latif, Rajib Rana, Shahzad Younis, Junaid Qadir, Julien Epps


The majority of existing speech emotion recognition research focuses on automatic emotion detection using training and testing data from the same corpus, collected under the same conditions. The performance of such systems has been shown to drop significantly in cross-corpus and cross-language scenarios. To address this problem, this paper exploits a transfer learning technique, novel in cross-language and cross-corpus scenarios, to improve the performance of speech emotion recognition systems. Evaluations on five different corpora in three different languages show that Deep Belief Networks (DBNs) offer better accuracy than previous approaches to cross-corpus emotion recognition, relative to a Sparse Autoencoder and Support Vector Machine (SVM) baseline system. Results also suggest that training on a large number of languages, and including a small fraction of the target data in training, can significantly boost accuracy over the baseline, even for a corpus with limited training examples.


DOI: 10.21437/Interspeech.2018-1625

Cite as: Latif, S., Rana, R., Younis, S., Qadir, J., Epps, J. (2018) Transfer Learning for Improving Speech Emotion Classification Accuracy. Proc. Interspeech 2018, 257-261, DOI: 10.21437/Interspeech.2018-1625.


@inproceedings{Latif2018,
  author={Siddique Latif and Rajib Rana and Shahzad Younis and Junaid Qadir and Julien Epps},
  title={Transfer Learning for Improving Speech Emotion Classification Accuracy},
  year={2018},
  booktitle={Proc. Interspeech 2018},
  pages={257--261},
  doi={10.21437/Interspeech.2018-1625},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1625}
}