Emotion Detection from Speech and Text

Mikel de Velasco, Raquel Justo, Josu Antón, Mikel Carrilero, M. Inés Torres


The main goal of this work is to carry out automatic emotion detection from speech by using both acoustic and textual information. To this end, a set of audio clips was extracted from a TV show where different guests discuss topics of current interest. The selected audio clips were transcribed and annotated in terms of emotional status using a crowdsourcing platform. A three-dimensional model was used to define a specific emotional status in order to capture the nuances of what the speaker is expressing, instead of being restricted to a predefined set of discrete categories. Different sets of acoustic parameters were considered to obtain the input vectors for a neural network. To represent each sequence of words, a model based on word embeddings was used. Different deep learning architectures were tested, yielding promising results despite the limited size of the corpus.
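As an illustration only (the abstract does not specify the authors' actual architecture or feature sets), the fusion of acoustic and textual information described above could be sketched as a small feed-forward network that concatenates an acoustic feature vector with an averaged word-embedding vector and regresses the three emotional dimensions. All dimensionalities and names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, not taken from the paper.
N_ACOUSTIC = 88   # e.g. an eGeMAPS-style acoustic feature vector
N_EMBED = 100     # word-embedding dimensionality
N_HIDDEN = 64

def sentence_vector(word_embeddings):
    """Average the word embeddings of an utterance into one fixed-size vector."""
    return np.mean(word_embeddings, axis=0)

def init_params(rng):
    """Randomly initialised weights for a one-hidden-layer network."""
    return {
        "W1": rng.normal(0, 0.1, (N_ACOUSTIC + N_EMBED, N_HIDDEN)),
        "b1": np.zeros(N_HIDDEN),
        "W2": rng.normal(0, 0.1, (N_HIDDEN, 3)),  # valence, arousal, dominance
        "b2": np.zeros(3),
    }

def predict(params, acoustic, text_vec):
    """Forward pass: concatenate both modalities, one tanh layer, linear output."""
    x = np.concatenate([acoustic, text_vec])
    h = np.tanh(x @ params["W1"] + params["b1"])
    return h @ params["W2"] + params["b2"]

# Toy usage: one utterance of 12 words with random features.
acoustic = rng.normal(size=N_ACOUSTIC)
words = rng.normal(size=(12, N_EMBED))
scores = predict(init_params(rng), acoustic, sentence_vector(words))
print(scores.shape)  # (3,) -> one value per emotional dimension
```

A trained system would learn the weights from the annotated corpus; this sketch only shows the early-fusion idea of joining the two input representations before the network.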


 DOI: 10.21437/IberSPEECH.2018-15

Cite as: de Velasco, M., Justo, R., Antón, J., Carrilero, M., Torres, M.I. (2018) Emotion Detection from Speech and Text. Proc. IberSPEECH 2018, 68-71, DOI: 10.21437/IberSPEECH.2018-15.


@inproceedings{deVelasco2018,
  author={Mikel {de Velasco} and Raquel Justo and Josu Antón and Mikel Carrilero and M. Inés Torres},
  title={{Emotion Detection from Speech and Text}},
  year=2018,
  booktitle={Proc. IberSPEECH 2018},
  pages={68--71},
  doi={10.21437/IberSPEECH.2018-15},
  url={http://dx.doi.org/10.21437/IberSPEECH.2018-15}
}