Interspeech'2005 - Eurospeech

Lisbon, Portugal
September 4-8, 2005

Speaker Independent Emotion Recognition by Early Fusion of Acoustic and Linguistic Features Within Ensembles

Björn Schuller, Ronald Müller, Manfred Lang, Gerhard Rigoll

Technische Universität München, Germany

Herein we present a comparison of novel concepts for a robust fusion of prosodic and verbal cues in speech emotion recognition. 276 acoustic features are extracted from each spoken phrase. For analysis of the linguistic content we use the Bag-of-Words text representation, which allows acoustic and linguistic features to be integrated into a single vector prior to final classification. Extensive feature selection is performed with filter- and wrapper-based methods: optimal feature sets are obtained via SVM-SFFS, and the relevance of individual features is assessed by computing the information gain ratio. Overall classification is realised by diverse ensemble approaches; the base classifiers include kernel machines, decision trees, Bayesian classifiers, and memory-based learners. Acoustics-only tests were run on a database comprising 39 speakers for speaker-independent accuracy analysis; additionally, the public Berlin Emotional Speech database is used. A further database of 4,221 movie-related phrases forms the basis for evaluating the combined analysis of acoustic and linguistic information. Overall, remarkable performance in the discrimination of seven discrete emotions is observed.
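The early-fusion scheme described above, concatenating acoustic features with a Bag-of-Words vector into one joint vector before a single final classifier, can be sketched as follows. This is a minimal illustration only; the vocabulary, feature names, and toy values are hypothetical and not taken from the paper (which uses 276 acoustic features and full databases):

```python
from collections import Counter

def bag_of_words(phrase, vocabulary):
    """Term-frequency Bag-of-Words vector over a fixed vocabulary."""
    counts = Counter(phrase.lower().split())
    return [counts[w] for w in vocabulary]

def early_fusion(acoustic_features, phrase, vocabulary):
    """Early fusion: concatenate acoustic and linguistic features into
    one vector, which a single classifier then consumes."""
    return list(acoustic_features) + bag_of_words(phrase, vocabulary)

# Hypothetical toy example; real systems would use hundreds of
# acoustic features (energy, pitch, spectral statistics, ...).
vocab = ["great", "terrible", "okay"]
acoustic = [0.42, 118.0, 0.07]  # e.g. energy, mean pitch, jitter
fused = early_fusion(acoustic, "that was a great great day", vocab)
# fused -> [0.42, 118.0, 0.07, 2, 0, 0]
```

The fused vector can then be passed to any of the base classifiers named in the abstract (e.g. a kernel machine), which is what distinguishes early fusion from late fusion of separate acoustic and linguistic classifier outputs.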


Bibliographic reference. Schuller, Björn / Müller, Ronald / Lang, Manfred / Rigoll, Gerhard (2005): "Speaker independent emotion recognition by early fusion of acoustic and linguistic features within ensembles", in Proc. INTERSPEECH 2005, 805-808.