Towards Speech Emotion Recognition “in the Wild” Using Aggregated Corpora and Deep Multi-Task Learning

Jaebok Kim, Gwenn Englebienne, Khiet P. Truong, Vanessa Evers


One of the challenges in Speech Emotion Recognition (SER) “in the wild” is the large mismatch between training and test data (e.g. speakers and tasks). In order to improve the generalisation capabilities of the emotion models, we propose to use Multi-Task Learning (MTL) with gender and naturalness as auxiliary tasks in deep neural networks. This method was evaluated in within-corpus and various cross-corpus classification experiments that simulate conditions “in the wild”. Compared to state-of-the-art methods based on Single-Task Learning (STL), our proposed MTL method significantly improved performance. In particular, models using both gender and naturalness achieved larger gains than those using either gender or naturalness alone. This benefit was also visible in the high-level feature representations obtained with our proposed method, where discriminative emotional clusters could be observed.
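The architecture described above — a shared network trunk with the main emotion task and auxiliary gender and naturalness tasks trained jointly — can be sketched as follows. This is a minimal illustration, not the authors' implementation: all layer sizes, class counts, and the auxiliary loss weight are assumed for the example.

```python
# Minimal sketch (NOT the paper's implementation) of deep multi-task
# learning: a shared trunk feeding three heads — emotion (main task),
# gender and naturalness (auxiliary tasks). Dimensions are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def relu_layer(x, w, b):
    return np.maximum(x @ w + b, 0.0)  # dense layer with ReLU

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Shared representation learned from acoustic features (assumed dim 40).
W_shared, b_shared = rng.normal(size=(40, 64)), np.zeros(64)

# Task-specific heads: 4 emotion classes, 2 genders, 2 naturalness levels.
heads = {
    "emotion":     (rng.normal(size=(64, 4)), np.zeros(4)),
    "gender":      (rng.normal(size=(64, 2)), np.zeros(2)),
    "naturalness": (rng.normal(size=(64, 2)), np.zeros(2)),
}

def forward(x):
    h = relu_layer(x, W_shared, b_shared)  # shared trunk
    return {task: softmax(h @ w + b) for task, (w, b) in heads.items()}

def mtl_loss(probs, labels, aux_weight=0.3):
    # Cross-entropy on the main task plus down-weighted auxiliary losses;
    # the weight 0.3 is an assumption for illustration only.
    def ce(p, y):
        return -np.log(p[np.arange(len(y)), y] + 1e-12).mean()
    return (ce(probs["emotion"], labels["emotion"])
            + aux_weight * ce(probs["gender"], labels["gender"])
            + aux_weight * ce(probs["naturalness"], labels["naturalness"]))

x = rng.normal(size=(8, 40))  # batch of 8 utterance-level feature vectors
probs = forward(x)
labels = {"emotion": rng.integers(0, 4, 8),
          "gender": rng.integers(0, 2, 8),
          "naturalness": rng.integers(0, 2, 8)}
loss = mtl_loss(probs, labels)
```

Because all tasks backpropagate through the same trunk, the auxiliary gender and naturalness labels regularise the shared representation, which is the intuition behind the cross-corpus gains reported in the abstract.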


DOI: 10.21437/Interspeech.2017-736

Cite as: Kim, J., Englebienne, G., Truong, K.P., Evers, V. (2017) Towards Speech Emotion Recognition “in the Wild” Using Aggregated Corpora and Deep Multi-Task Learning. Proc. Interspeech 2017, 1113-1117, DOI: 10.21437/Interspeech.2017-736.


@inproceedings{Kim2017,
  author={Jaebok Kim and Gwenn Englebienne and Khiet P. Truong and Vanessa Evers},
  title={Towards Speech Emotion Recognition “in the Wild” Using Aggregated Corpora and Deep Multi-Task Learning},
  year=2017,
  booktitle={Proc. Interspeech 2017},
  pages={1113--1117},
  doi={10.21437/Interspeech.2017-736},
  url={http://dx.doi.org/10.21437/Interspeech.2017-736}
}