At the Border of Acoustics and Linguistics: Bag-of-Audio-Words for the Recognition of Emotions in Speech

Maximilian Schmitt, Fabien Ringeval, Björn Schuller


Recognition of natural emotion in speech is a challenging task. Different methods have been proposed to tackle it, such as brute-forcing of acoustic features or even end-to-end learning. Recently, bag-of-audio-words (BoAW) representations of acoustic low-level descriptors (LLDs) have been employed successfully in acoustic event classification and other audio recognition tasks. In this approach, feature vectors of acoustic LLDs are quantised according to a learnt codebook of audio words, and a histogram of the occurring ‘words’ is then built. Despite their potential, BoAW have not been thoroughly studied in emotion recognition. Here, we propose a method using BoAW built solely from mel-frequency cepstral coefficients (MFCCs). Support vector regression is then used to predict emotion continuously in time and value, e.g., in the dimensions arousal and valence. We compare this approach with the computation of functionals over the MFCCs and perform extensive evaluations on the RECOLA database, which features spontaneous and natural emotions. Results show that the BoAW representation of MFCCs not only performs significantly better than functionals, but also outperforms by far most recently published deep learning approaches, including convolutional and recurrent networks.
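The BoAW pipeline described above can be sketched in a few lines: learn a codebook over LLD frames (here with a plain k-means, though the paper's exact codebook-learning procedure may differ), assign each frame to its nearest audio word, and count assignments into a normalised histogram. Function names, the codebook size, and the use of Euclidean distance are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def learn_codebook(lld_frames, codebook_size, n_iter=20, seed=0):
    """Learn a codebook of 'audio words' from LLD frames via k-means.

    Illustrative sketch only; the paper may use a different codebook
    generation scheme (e.g., random sampling of frames).
    """
    rng = np.random.default_rng(seed)
    # Initialise codewords by sampling random frames (fancy indexing copies).
    idx = rng.choice(len(lld_frames), codebook_size, replace=False)
    codebook = lld_frames[idx]
    for _ in range(n_iter):
        # Assign each frame to its nearest codeword (Euclidean distance).
        dists = np.linalg.norm(
            lld_frames[:, None, :] - codebook[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        # Move each codeword to the mean of its assigned frames.
        for k in range(codebook_size):
            members = lld_frames[assign == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook

def bag_of_audio_words(lld_frames, codebook):
    """Quantise frames against the codebook; return a normalised histogram."""
    dists = np.linalg.norm(
        lld_frames[:, None, :] - codebook[None, :, :], axis=2)
    assign = dists.argmin(axis=1)
    hist = np.bincount(assign, minlength=len(codebook)).astype(float)
    return hist / hist.sum()  # term-frequency normalisation
```

The resulting fixed-length histogram (one per analysis window) would then be fed to a regressor such as support vector regression to predict arousal and valence over time.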


DOI: 10.21437/Interspeech.2016-1124

Cite as

Schmitt, M., Ringeval, F., Schuller, B. (2016) At the Border of Acoustics and Linguistics: Bag-of-Audio-Words for the Recognition of Emotions in Speech. Proc. Interspeech 2016, 495-499.

Bibtex
@inproceedings{Schmitt+2016,
  author={Maximilian Schmitt and Fabien Ringeval and Björn Schuller},
  title={At the Border of Acoustics and Linguistics: Bag-of-Audio-Words for the Recognition of Emotions in Speech},
  year={2016},
  booktitle={Interspeech 2016},
  doi={10.21437/Interspeech.2016-1124},
  url={http://dx.doi.org/10.21437/Interspeech.2016-1124},
  pages={495--499}
}