Direct Modelling of Speech Emotion from Raw Speech

Siddique Latif, Rajib Rana, Sara Khalifa, Raja Jurdak, Julien Epps


Speech emotion recognition is a challenging task that heavily depends on hand-engineered acoustic features, which are typically crafted to echo human perception of speech signals. However, a filter bank designed from perceptual evidence is not always guaranteed to be optimal in a statistical modelling framework where the end goal is, for example, emotion classification. This has fuelled the emerging trend of learning representations directly from raw speech, especially using deep neural networks. In particular, the combination of Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks has gained great traction: LSTMs are valued for their intrinsic ability to learn the contextual information crucial for emotion recognition, while CNNs are used for their ability to overcome the scalability problems of regular neural networks. In this paper, we show that there are still opportunities to improve the performance of emotion recognition from raw speech by exploiting the properties of CNNs in modelling contextual information. We propose the use of parallel convolutional layers to harness multiple temporal resolutions in the feature extraction block, which is jointly trained with the LSTM-based classification network for the emotion recognition task. Our results suggest that the proposed model can reach the performance of CNNs trained with hand-engineered features on both the IEMOCAP and MSP-IMPROV datasets.
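The core idea of the feature extraction block can be illustrated with a toy NumPy sketch: several 1-D convolutions with different kernel sizes run in parallel over the same raw waveform, and their feature maps are aligned and stacked, capturing multiple temporal resolutions at once. The kernel sizes, random filters, and single-filter-per-branch setup below are illustrative assumptions; the abstract does not specify the paper's actual filter configuration, and the jointly trained LSTM classifier is omitted.

```python
import numpy as np

def conv1d(x, kernel):
    """Valid-mode 1-D cross-correlation of signal x with a filter kernel."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

def parallel_conv_features(x, kernel_sizes=(3, 5, 7), seed=0):
    """Toy multi-resolution feature extractor: one random filter per
    kernel size, applied in parallel; the resulting feature maps are
    truncated to a common length and stacked (branches x time)."""
    rng = np.random.default_rng(seed)
    maps = [conv1d(x, rng.standard_normal(k)) for k in kernel_sizes]
    n = min(len(m) for m in maps)       # align lengths across branches
    return np.stack([m[:n] for m in maps])

# A 20-sample "raw waveform" yields a (3, 14) feature matrix:
# three branches, each truncated to the shortest valid-mode output.
features = parallel_conv_features(np.arange(20.0))
```

In the actual model these stacked multi-resolution features would feed the LSTM classifier, with all filters learned end-to-end rather than drawn at random.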


 DOI: 10.21437/Interspeech.2019-3252

Cite as: Latif, S., Rana, R., Khalifa, S., Jurdak, R., Epps, J. (2019) Direct Modelling of Speech Emotion from Raw Speech. Proc. Interspeech 2019, 3920-3924, DOI: 10.21437/Interspeech.2019-3252.


@inproceedings{Latif2019,
  author={Siddique Latif and Rajib Rana and Sara Khalifa and Raja Jurdak and Julien Epps},
  title={{Direct Modelling of Speech Emotion from Raw Speech}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={3920--3924},
  doi={10.21437/Interspeech.2019-3252},
  url={http://dx.doi.org/10.21437/Interspeech.2019-3252}
}