A Three-Layer Emotion Perception Model for Valence and Arousal-Based Detection from Multilingual Speech

Xingfeng Li, Masato Akagi


Automated emotion detection from speech has recently shifted from monolingual to multilingual tasks, aiming at human-like interaction in real-life settings where a system must handle more than a single input language. However, most work on monolingual emotion detection generalizes poorly across languages, because the optimal feature set differs from one language to another. Our study proposes a framework to design, implement, and validate an emotion detection system using multiple corpora. Emotions are first described in a continuous dimensional space of valence and arousal. A three-layer model incorporating fuzzy inference systems is then used to estimate these two dimensions. Speech features derived from prosodic, spectral, and glottal waveform measures are examined and selected to capture emotional cues. The new system outperformed an existing state-of-the-art system, yielding a smaller mean absolute error and a higher correlation between estimates and human evaluations. Moreover, results under speaker-independent validation are comparable to those of human evaluators.
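As a rough illustration of the kind of machinery the abstract describes, the sketch below shows a zero-order Takagi-Sugeno fuzzy inference step mapping a single normalized acoustic feature to a dimensional score, plus the two evaluation measures the paper reports (mean absolute error and Pearson correlation). This is a minimal hypothetical sketch, not the authors' three-layer model: the membership functions, rule consequents, and the choice of one feature are all illustrative assumptions.

```python
import math

# Hypothetical sketch (NOT the paper's implementation): a zero-order
# Takagi-Sugeno fuzzy inference step mapping one normalized feature,
# e.g. mean F0 scaled to [0, 1], to an arousal estimate in [-1, 1].

def triangular(x, a, b, c):
    """Triangular membership function peaking at b on the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Three fuzzy sets over the feature axis, each paired with a crisp
# consequent value (assumed rule outputs, chosen for illustration only).
RULES = [
    ((0.0, 0.0, 0.5), -1.0),   # "low"  feature -> low arousal
    ((0.0, 0.5, 1.0),  0.0),   # "mid"  feature -> neutral arousal
    ((0.5, 1.0, 1.0),  1.0),   # "high" feature -> high arousal
]

def infer(x):
    """Sugeno defuzzification: weighted average of rule consequents."""
    num = den = 0.0
    for (a, b, c), out in RULES:
        w = triangular(x, a, b, c)
        num += w * out
        den += w
    return num / den if den else 0.0

def mae(est, ref):
    """Mean absolute error between estimates and reference annotations."""
    return sum(abs(e - r) for e, r in zip(est, ref)) / len(ref)

def pearson(est, ref):
    """Pearson correlation between estimates and reference annotations."""
    n = len(ref)
    me, mr = sum(est) / n, sum(ref) / n
    cov = sum((e - me) * (r - mr) for e, r in zip(est, ref))
    se = math.sqrt(sum((e - me) ** 2 for e in est))
    sr = math.sqrt(sum((r - mr) ** 2 for r in ref))
    return cov / (se * sr)
```

In the paper's setting, such an inference stage would sit inside a multi-layer mapping from selected prosodic, spectral, and glottal features to the valence and arousal axes, and `mae`/`pearson` correspond to the reported comparison against human evaluators.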


DOI: 10.21437/Interspeech.2018-1820

Cite as: Li, X., Akagi, M. (2018) A Three-Layer Emotion Perception Model for Valence and Arousal-Based Detection from Multilingual Speech. Proc. Interspeech 2018, 3643-3647, DOI: 10.21437/Interspeech.2018-1820.


@inproceedings{Li2018,
  author={Xingfeng Li and Masato Akagi},
  title={A Three-Layer Emotion Perception Model for Valence and Arousal-Based Detection from Multilingual Speech},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={3643--3647},
  doi={10.21437/Interspeech.2018-1820},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1820}
}