An Attention Pooling Based Representation Learning Method for Speech Emotion Recognition

Pengcheng Li, Yan Song, Ian McLoughlin, Wu Guo, Lirong Dai


This paper proposes an attention pooling based representation learning method for speech emotion recognition (SER). The emotional representation is learned in an end-to-end fashion by applying a deep convolutional neural network (CNN) directly to spectrograms extracted from speech utterances. Motivated by the success of GoogLeNet, two groups of filters with different shapes are designed to capture both temporal and frequency domain context information from the input spectrogram. The learned features are concatenated and fed into the subsequent convolutional layers. To learn the final emotional representation, a novel attention pooling method is further proposed. Compared with existing pooling methods, such as max-pooling and average-pooling, the proposed attention pooling can effectively incorporate class-agnostic bottom-up and class-specific top-down attention maps. We conduct extensive evaluations on the benchmark IEMOCAP data to assess the effectiveness of the proposed representation. Results demonstrate a recognition performance of 71.8% weighted accuracy (WA) and 68% unweighted accuracy (UA) over four emotions, which outperforms the state-of-the-art method by about 3% absolute for WA and 4% for UA.
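To make the pooling idea concrete, the following is a minimal NumPy sketch of attention pooling over a flattened convolutional feature map: a class-agnostic bottom-up map provides spatial attention weights, and class-specific top-down maps are pooled with those weights to produce logits. The function name, projection matrices (`W_top`, `w_bottom`), and shapes are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pooling(features, W_top, w_bottom):
    """Sketch of attention pooling (names/shapes are assumptions).

    features : (L, C) flattened conv feature map, L spatial locations
    W_top    : (C, K) projection to K class-specific top-down maps
    w_bottom : (C,)   projection to a class-agnostic bottom-up map
    """
    top_down = features @ W_top        # (L, K) class-specific score maps
    bottom_up = features @ w_bottom    # (L,)   class-agnostic saliency
    attn = softmax(bottom_up)          # spatial attention weights, sum to 1
    logits = attn @ top_down           # attention-weighted pooling -> (K,)
    return logits

rng = np.random.default_rng(0)
feats = rng.normal(size=(16, 8))       # 16 locations, 8 channels
logits = attention_pooling(feats, rng.normal(size=(8, 4)), rng.normal(size=(8,)))
print(logits.shape)                    # -> (4,): one logit per emotion class
```

Unlike max- or average-pooling, the weights here depend on the input itself, so salient time-frequency regions contribute more to the utterance-level representation.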


DOI: 10.21437/Interspeech.2018-1242

Cite as: Li, P., Song, Y., McLoughlin, I., Guo, W., Dai, L. (2018) An Attention Pooling Based Representation Learning Method for Speech Emotion Recognition. Proc. Interspeech 2018, 3087-3091, DOI: 10.21437/Interspeech.2018-1242.


@inproceedings{Li2018,
  author={Pengcheng Li and Yan Song and Ian McLoughlin and Wu Guo and Lirong Dai},
  title={An Attention Pooling Based Representation Learning Method for Speech Emotion Recognition},
  year={2018},
  booktitle={Proc. Interspeech 2018},
  pages={3087--3091},
  doi={10.21437/Interspeech.2018-1242},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1242}
}