INTERSPEECH 2010
11th Annual Conference of the International Speech Communication Association

Makuhari, Chiba, Japan
September 26-30, 2010

Automated Vocal Emotion Recognition Using Phoneme Class Specific Features

Géza Kiss, Jan P. H. van Santen

Oregon Health & Science University, USA

Methods for automated vocal emotion recognition often use acoustic feature vectors that are computed for each frame in an utterance, together with global statistics based on these acoustic feature vectors. However, at least two considerations argue for the use of phoneme class specific features for emotion recognition. First, there are well-known effects of phoneme class on some of these features. Second, it is plausible that emotion influences the speech signal in ways that differ between phoneme classes. A new method based on the concept of phoneme class specific features is proposed, in which different features are selected for regions associated with different phoneme classes and then optimally combined using machine learning algorithms. A small but significant improvement was found when this method was compared with an otherwise identical method in which features were used uniformly over different phoneme classes.
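To make the idea concrete, the following is a minimal, hypothetical sketch of the pooling step the abstract describes: frame-level features are grouped by broad phoneme class (the class grouping and feature names here are illustrative assumptions, not the paper's actual configuration), and global statistics are computed separately per class rather than over the whole utterance.

```python
# Hypothetical sketch: phoneme-class-specific feature pooling.
# Assumes frame-level features and a phoneme alignment are already
# available; phoneme symbols and class groupings below are illustrative.
import statistics

# Assumed broad phoneme classes (the paper's exact classes are not
# specified in this abstract).
PHONEME_CLASS = {
    "aa": "vowel", "iy": "vowel", "eh": "vowel",
    "m": "nasal", "n": "nasal",
    "s": "fricative", "f": "fricative",
}

def class_specific_stats(frames):
    """Pool frame-level features separately per phoneme class.

    frames: list of (phoneme, feature_value) pairs, e.g. per-frame F0.
    Returns {class: (mean, stdev)}, with statistics computed only over
    frames whose phoneme belongs to that class.
    """
    by_class = {}
    for phoneme, value in frames:
        cls = PHONEME_CLASS.get(phoneme)
        if cls is not None:
            by_class.setdefault(cls, []).append(value)
    # Per-class global statistics; these would be concatenated into
    # one vector and passed to a classifier.
    return {
        cls: (statistics.fmean(values), statistics.pstdev(values))
        for cls, values in by_class.items()
    }

# Example: per-frame pitch values tagged with their aligned phoneme.
frames = [("aa", 200.0), ("aa", 210.0), ("s", 0.0),
          ("m", 180.0), ("m", 176.0)]
print(class_specific_stats(frames))
```

In the uniform baseline the abstract compares against, the same statistics would instead be computed once over all frames; the class-specific variant lets feature selection differ per class before the vectors are combined.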

Full Paper

Bibliographic reference.  Kiss, Géza / Santen, Jan P. H. van (2010): "Automated vocal emotion recognition using phoneme class specific features", In INTERSPEECH-2010, 1161-1164.