INTERSPEECH 2008
9th Annual Conference of the International Speech Communication Association

Brisbane, Australia
September 22-26, 2008

Speaker-Independent Emotion Recognition Based on Feature Vector Classification

Jeong-Sik Park (1), Ji-Hwan Kim (2), Sang-Min Yoon (1), Yung-Hwan Oh (1)

(1) KAIST, Korea; (2) Sogang University, Korea

This paper proposes a new feature vector classification method for speech emotion recognition. The conventional feature vector classification, originally applied to speaker identification, categorizes feature vectors as overlapped or non-overlapped. This method discards all overlapped vectors during model training and uses only the non-overlapped vectors to reconstruct the corresponding speaker models. Although the conventional classification performed well in speaker identification, it has limitations in constructing robust models when the number of overlapped vectors increases significantly, as it does in emotion recognition. To overcome this drawback, we propose a more sophisticated classification method that selects discriminative vectors from among the overlapped vectors and includes them in model reconstruction. In experiments on an LDC emotion corpus, the proposed classification approach outperformed the conventional method.
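As a rough illustration only (not the authors' algorithm, which the abstract does not fully specify), the sketch below assumes GMM emotion models and a hypothetical log-likelihood-margin test for overlap: a vector is treated as overlapped when the margin between its target emotion's model score and the best competing model score falls below a threshold, and overlapped vectors that the target model still narrowly wins are kept as "discriminative" for model reconstruction. The function names (train_emotion_models, split_vectors, retrain), the margin criterion, and the use of scikit-learn GMMs are all assumptions introduced for this example.

    # Hypothetical sketch of overlapped/non-overlapped vector classification
    # with GMM emotion models; not the paper's exact algorithm.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def train_emotion_models(features_by_emotion, n_components=8, seed=0):
        # Fit one GMM per emotion from its training feature vectors.
        models = {}
        for emotion, feats in features_by_emotion.items():
            gmm = GaussianMixture(n_components=n_components,
                                  covariance_type="diag", random_state=seed)
            gmm.fit(feats)
            models[emotion] = gmm
        return models

    def split_vectors(feats, target, models, margin=1.0):
        # Per-vector log-likelihood under every emotion model.
        scores = {e: m.score_samples(feats) for e, m in models.items()}
        target_ll = scores[target]
        competitor_ll = np.max([s for e, s in scores.items() if e != target], axis=0)
        diff = target_ll - competitor_ll
        # Non-overlapped: clearly better explained by the target model.
        non_overlapped = feats[diff >= margin]
        # Discriminative overlapped: target model still wins, but only narrowly
        # (assumed selection rule for illustration).
        discriminative = feats[(diff < margin) & (diff > 0.0)]
        return non_overlapped, discriminative

    def retrain(features_by_emotion, models, n_components=8, seed=0):
        # Reconstruct each emotion model from non-overlapped plus
        # discriminative overlapped vectors.
        new_models = {}
        for emotion, feats in features_by_emotion.items():
            non_ovl, disc = split_vectors(feats, emotion, models)
            keep = np.vstack([non_ovl, disc]) if len(disc) else non_ovl
            gmm = GaussianMixture(n_components=n_components,
                                  covariance_type="diag", random_state=seed)
            gmm.fit(keep)
            new_models[emotion] = gmm
        return new_models

The conventional scheme in the abstract would correspond to retraining on non_overlapped alone; the proposed scheme additionally retains the discriminative subset.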


Bibliographic reference. Park, Jeong-Sik / Kim, Ji-Hwan / Yoon, Sang-Min / Oh, Yung-Hwan (2008): "Speaker-independent emotion recognition based on feature vector classification", In INTERSPEECH-2008, 2775-2778.