Adversarial Auto-Encoders for Speech Based Emotion Recognition

Saurabh Sahu, Rahul Gupta, Ganesh Sivaraman, Wael AbdAlmageed, Carol Espy-Wilson


Recently, generative adversarial networks and adversarial auto-encoders have gained considerable attention in the machine learning community due to their exceptional performance in tasks such as digit classification and face recognition. They map the auto-encoder's bottleneck layer output (termed code vectors) to different noise Probability Distribution Functions (PDFs), which can be further regularized to cluster based on class information. In addition, they allow the generation of synthetic samples by sampling code vectors from the mapped PDFs. Inspired by these properties, we investigate the application of adversarial auto-encoders to the domain of emotion recognition. Specifically, we conduct experiments on the following two aspects: (i) their ability to encode high-dimensional feature vector representations of emotional utterances into a compressed space (with minimal loss of emotion class discriminability in the compressed space), and (ii) their ability to regenerate synthetic samples in the original feature space, to be used later for purposes such as training emotion recognition classifiers. We demonstrate the promise of adversarial auto-encoders with regard to these aspects on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) corpus and present our analysis.
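The two mechanisms named in the abstract (regularizing the bottleneck codes toward an imposed prior via an adversarial game, and generating synthetic samples by decoding draws from that prior) can be sketched in a toy form. The sketch below is an assumption-laden illustration, not the paper's actual model: it uses a linear encoder/decoder, a logistic-regression discriminator as a stand-in for the MLP critic used in practice, and arbitrary placeholder dimensions for the features and code vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 256 utterances, 32-dim features, 4-dim code vectors.
n, d_in, d_code = 256, 32, 4
X = rng.normal(size=(n, d_in))        # placeholder "feature vectors"

W_enc = rng.normal(scale=0.1, size=(d_in, d_code))   # linear encoder
W_dec = rng.normal(scale=0.1, size=(d_code, d_in))   # linear decoder
w_disc = rng.normal(scale=0.1, size=d_code)          # discriminator weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1e-2
for step in range(200):
    # 1) Reconstruction phase: minimise (1/2n) * ||X W_enc W_dec - X||^2.
    Z = X @ W_enc
    err = Z @ W_dec - X
    W_dec -= lr * (Z.T @ err) / n
    W_enc -= lr * (X.T @ (err @ W_dec.T)) / n

    # 2) Regularisation phase: the discriminator learns to separate
    #    draws from the imposed Gaussian prior (label 1) from encoder
    #    codes (label 0) ...
    Z = X @ W_enc
    Z_prior = rng.normal(size=(n, d_code))
    p_prior = sigmoid(Z_prior @ w_disc)
    p_code = sigmoid(Z @ w_disc)
    w_disc += lr * (Z_prior.T @ (1.0 - p_prior) - Z.T @ p_code) / n

    # ... and the encoder is updated to fool the discriminator, pushing
    # the code distribution toward the prior.
    W_enc += lr * (X.T @ ((1.0 - p_code)[:, None] * w_disc[None, :])) / n

# After training, synthetic samples in the original feature space come
# from decoding fresh draws from the prior.
Z_new = rng.normal(size=(10, d_code))
X_synth = Z_new @ W_dec
print(X_synth.shape)   # (10, 32)
```

In the full adversarial auto-encoder the same two-phase alternation applies, but with deep non-linear networks for all three components and, when class information is used, a prior conditioned on the emotion label so that codes cluster by class.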


DOI: 10.21437/Interspeech.2017-1421

Cite as: Sahu, S., Gupta, R., Sivaraman, G., AbdAlmageed, W., Espy-Wilson, C. (2017) Adversarial Auto-Encoders for Speech Based Emotion Recognition. Proc. Interspeech 2017, 1243-1247, DOI: 10.21437/Interspeech.2017-1421.


@inproceedings{Sahu2017,
  author={Saurabh Sahu and Rahul Gupta and Ganesh Sivaraman and Wael AbdAlmageed and Carol Espy-Wilson},
  title={Adversarial Auto-Encoders for Speech Based Emotion Recognition},
  year=2017,
  booktitle={Proc. Interspeech 2017},
  pages={1243--1247},
  doi={10.21437/Interspeech.2017-1421},
  url={http://dx.doi.org/10.21437/Interspeech.2017-1421}
}