ISCA Archive Interspeech 2020

Jointly Fine-Tuning “BERT-Like” Self Supervised Models to Improve Multimodal Speech Emotion Recognition

Shamane Siriwardhana, Andrew Reis, Rivindu Weerasekera, Suranga Nanayakkara

Multimodal emotion recognition from speech is an important area in affective computing. Fusing multiple data modalities and learning representations with limited amounts of labeled data are challenging tasks. In this paper, we explore the use of modality-specific “BERT-like” pretrained Self Supervised Learning (SSL) architectures to represent both speech and text modalities for the task of multimodal speech emotion recognition. By conducting experiments on three publicly available datasets (IEMOCAP, CMU-MOSEI, and CMU-MOSI), we show that jointly fine-tuning “BERT-like” SSL architectures achieves state-of-the-art (SOTA) results. We also evaluate two methods of fusing speech and text modalities and show that a simple fusion mechanism can outperform more complex ones when using SSL models that have similar architectural properties to BERT.
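The simple fusion mechanism the abstract alludes to can be illustrated as concatenating pooled utterance embeddings from the two modality-specific encoders and feeding the result to a single classification head trained jointly with both encoders. The sketch below shows this shape-level idea only; the embedding dimensions, the 4-way emotion label set, and the random weights are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pooled utterance embeddings produced by modality-specific
# "BERT-like" SSL encoders (768 is an assumed, illustrative dimension).
speech_emb = rng.standard_normal(768)  # e.g. from a speech SSL encoder
text_emb = rng.standard_normal(768)    # e.g. from a text SSL encoder

# Simple fusion: concatenate the two pooled vectors into one.
fused = np.concatenate([speech_emb, text_emb])  # shape (1536,)

# A single linear classification head over the fused vector.
# Weights are random here; in practice they are trained jointly
# while both encoders are fine-tuned end to end.
num_classes = 4  # assumed 4-way emotion setup, for illustration
W = rng.standard_normal((num_classes, fused.shape[0])) * 0.01
b = np.zeros(num_classes)
logits = W @ fused + b

# Softmax over emotion classes.
probs = np.exp(logits - logits.max())
probs /= probs.sum()
predicted = int(np.argmax(probs))
print(fused.shape, predicted)
```

Because fusion is a single concatenation, gradients from the shared head flow directly into both pretrained encoders during joint fine-tuning, which is the property the paper's fusion comparison exercises.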


doi: 10.21437/Interspeech.2020-1212

Cite as: Siriwardhana, S., Reis, A., Weerasekera, R., Nanayakkara, S. (2020) Jointly Fine-Tuning “BERT-Like” Self Supervised Models to Improve Multimodal Speech Emotion Recognition. Proc. Interspeech 2020, 3755-3759, doi: 10.21437/Interspeech.2020-1212

@inproceedings{siriwardhana20_interspeech,
  author={Shamane Siriwardhana and Andrew Reis and Rivindu Weerasekera and Suranga Nanayakkara},
  title={{Jointly Fine-Tuning “BERT-Like” Self Supervised Models to Improve Multimodal Speech Emotion Recognition}},
  year={2020},
  booktitle={Proc. Interspeech 2020},
  pages={3755--3759},
  doi={10.21437/Interspeech.2020-1212}
}