INTERSPEECH 2008
9th Annual Conference of the International Speech Communication Association

Brisbane, Australia
September 22-26, 2008

Balancing Spoken Content Adaptation and Unit Length in the Recognition of Emotion and Interest

Bogdan Vlasenko (1), Björn Schuller (2), Kinfe Tadesse Mengistu (1), Gerhard Rigoll (2), Andreas Wendemuth (1)

(1) Otto-von-Guericke-Universität Magdeburg, Germany; (2) Technische Universität München, Germany

Recognition and detection of non-lexical or paralinguistic cues from speech usually relies on one general model per event (emotional state, level of interest). Commonly, this model is trained independently of the phonetic structure. Given sufficient data, this approach appears to work well enough. Yet, this paper addresses the question of the phonetic unit level at which emotions and level of interest emerge. We therefore compare phoneme-, word-, and sentence-level analysis for emotional sentence classification, using a large prosodic, spectral, and voice-quality feature space for SVM classification and MFCC features for HMM/GMM modeling. The experiments also take into account the need for ASR to select the appropriate unit models. In experiments on the well-known public EMO-DB database and the SUSAS and AVIC spontaneous-interest corpora, we find that sentence-level analysis yields the best emotion recognition results. We discuss the implications of these types of analysis for the design of robust emotion and interest recognition in usable human-machine interfaces (HMI).
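As an illustrative sketch of the sentence-level GMM approach the abstract describes, the following Python snippet trains one diagonal-covariance GMM per emotion on MFCC frames and labels an utterance with the model yielding the highest log-likelihood. This is a minimal reconstruction under stated assumptions, not the authors' implementation: the sampling rate, MFCC count, mixture count, and file-path layout are all hypothetical.

import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

N_MFCC = 13        # assumption: 13 static MFCCs per frame
N_COMPONENTS = 32  # assumption: mixture components per emotion model

def mfcc_frames(wav_path):
    """Return an (n_frames, N_MFCC) matrix of MFCCs for one utterance."""
    y, sr = librosa.load(wav_path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=N_MFCC).T

def train_models(train_files):
    """train_files: dict mapping emotion label -> list of wav paths."""
    models = {}
    for label, paths in train_files.items():
        frames = np.vstack([mfcc_frames(p) for p in paths])
        gmm = GaussianMixture(n_components=N_COMPONENTS,
                              covariance_type='diag', max_iter=200)
        gmm.fit(frames)
        models[label] = gmm
    return models

def classify(models, wav_path):
    """Sentence-level decision: emotion whose GMM scores the utterance highest."""
    frames = mfcc_frames(wav_path)
    return max(models, key=lambda lbl: models[lbl].score(frames))

For the phoneme- or word-level variants compared in the paper, one would instead train unit-specific models and use ASR output to decide which unit models score each segment, which is the role of ASR noted in the abstract.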


Bibliographic reference. Vlasenko, Bogdan / Schuller, Björn / Mengistu, Kinfe Tadesse / Rigoll, Gerhard / Wendemuth, Andreas (2008): "Balancing spoken content adaptation and unit length in the recognition of emotion and interest", in INTERSPEECH-2008, 805-808.