INTERSPEECH 2007
8th Annual Conference of the International Speech Communication Association

Antwerp, Belgium
August 27-31, 2007

Combining Frame and Turn-Level Information for Robust Recognition of Emotions Within Speech

Bogdan Vlasenko (1), Björn Schuller (2), Andreas Wendemuth (1), Gerhard Rigoll (2)

(1) Otto-von-Guericke-University, Germany
(2) Technische Universität München, Germany

Current approaches to the recognition of emotion within speech usually rely on statistical feature information obtained by applying functionals at the turn or chunk level. Yet it is well known that important information on temporal sub-layers, such as the frame level, is thereby lost. We therefore investigate the benefits of integrating such information into the turn-level feature space. For frame-level analysis we use GMMs for classification, with 39 MFCC and energy features processed by cepstral mean subtraction (CMS). In a subsequent step, the output scores are fed forward into a turn-level SVM emotion recognition engine operating on a feature space of roughly 1.4k features. We use a variety of low-level descriptors and functionals to cover prosodic, speech-quality, and articulatory aspects. Extensive test runs are carried out on the public databases EMO-DB and SUSAS. Speaker-independent analysis is addressed by speaker normalization. Overall, the results strongly emphasize the benefits of integrating features across diverse time scales.
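The two-stage idea described above can be sketched as follows: per-emotion GMMs score the frame-level features of a turn, and those scores are appended to turn-level statistics before SVM classification. This is a minimal illustration on synthetic data, not the paper's implementation; the feature dimensions, class count, and the choice of scikit-learn estimators are assumptions for the sketch.

```python
# Sketch (assumed setup, synthetic data): frame-level GMM scores fused
# with turn-level statistical features, then classified by an SVM.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_emotions, n_turns_per_class, n_frames, frame_dim = 3, 10, 50, 39

# Synthetic frame-level features (stand-in for MFCC + energy) per turn.
turns = [rng.normal(loc=e, size=(n_frames, frame_dim))
         for e in range(n_emotions) for _ in range(n_turns_per_class)]
labels = np.repeat(np.arange(n_emotions), n_turns_per_class)

# Stage 1: one GMM per emotion class, trained on that class's frames.
gmms = [GaussianMixture(n_components=2, random_state=0)
        .fit(np.vstack([t for t, y in zip(turns, labels) if y == e]))
        for e in range(n_emotions)]

def combined_features(turn):
    # Turn-level statistics (here simply the mean of each coefficient)
    # concatenated with the per-class GMM log-likelihood scores.
    stats = turn.mean(axis=0)
    scores = np.array([g.score(turn) for g in gmms])
    return np.concatenate([stats, scores])

X = np.array([combined_features(t) for t in turns])

# Stage 2: SVM on the fused turn-level feature space.
svm = SVC().fit(X, labels)
print(svm.score(X, labels))  # training accuracy on the synthetic data
```

In the paper the turn-level space is far richer (around 1.4k features from many low-level descriptors and functionals); the fusion principle, however, is the same: frame-level classifier scores become additional turn-level features.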

Full Paper

Bibliographic reference.  Vlasenko, Bogdan / Schuller, Björn / Wendemuth, Andreas / Rigoll, Gerhard (2007): "Combining frame and turn-level information for robust recognition of emotions within speech", In INTERSPEECH-2007, 2249-2252.