Variational Autoencoders for Learning Latent Representations of Speech Emotion: A Preliminary Study

Siddique Latif, Rajib Rana, Junaid Qadir, Julien Epps


Learning latent representations of data in an unsupervised fashion can provide features that enhance the performance of a classifier. For speech emotion recognition, generating effective features is crucial. Handcrafted features currently dominate speech emotion recognition; however, features learned automatically using deep learning have shown strong success in many problems, especially in image processing. In particular, deep generative models such as Variational Autoencoders (VAEs) have achieved enormous success in generating features for natural images. Inspired by this, we propose using VAEs to derive latent representations of speech signals and use these representations to classify emotions. To the best of our knowledge, we are the first to propose VAEs for speech emotion classification. Evaluations on the IEMOCAP dataset demonstrate that features learned by VAEs can produce state-of-the-art results for speech emotion classification.
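The abstract does not specify the authors' architecture, but the core VAE machinery it refers to (the reparameterization trick and the KL regularizer on the latent space) can be sketched roughly as follows. This is an illustrative toy in NumPy, not the paper's implementation; the latent dimensionality, batch size, and variable names are assumptions for the example only.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, rng):
    # z = mu + sigma * eps with eps ~ N(0, I): sampling is rewritten so that
    # gradients can flow through mu and log_var during training
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    # KL( N(mu, sigma^2) || N(0, I) ) for a diagonal Gaussian,
    # summed over latent dimensions; this term regularizes the latent space
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var), axis=-1)

# toy "encoder outputs" for a batch of 4 utterances and an 8-D latent space
mu = rng.standard_normal((4, 8))
log_var = 0.1 * rng.standard_normal((4, 8))

z = reparameterize(mu, log_var, rng)  # latent features, e.g. input to an emotion classifier
kl = kl_divergence(mu, log_var)

print(z.shape)   # (4, 8)
print(kl.shape)  # (4,)
```

In a VAE-based pipeline like the one the abstract describes, the encoder's latent code `z` (or the mean `mu`) would serve as the learned feature vector passed to the downstream emotion classifier.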


DOI: 10.21437/Interspeech.2018-1568

Cite as: Latif, S., Rana, R., Qadir, J., Epps, J. (2018) Variational Autoencoders for Learning Latent Representations of Speech Emotion: A Preliminary Study. Proc. Interspeech 2018, 3107-3111, DOI: 10.21437/Interspeech.2018-1568.


@inproceedings{Latif2018,
  author={Siddique Latif and Rajib Rana and Junaid Qadir and Julien Epps},
  title={Variational Autoencoders for Learning Latent Representations of Speech Emotion: A Preliminary Study},
  year={2018},
  booktitle={Proc. Interspeech 2018},
  pages={3107--3111},
  doi={10.21437/Interspeech.2018-1568},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1568}
}