International Conference on Auditory-Visual Speech Processing 2008

Tangalooma Wild Dolphin Resort, Moreton Island, Queensland, Australia
September 26-29, 2008

Audio-Visual Feature Selection and Reduction for Emotion Classification

Sanaul Haq, Philip J. B. Jackson, James D. Edge

Centre for Vision, Speech and Signal Processing (CVSSP), University of Surrey, Guildford, UK

Recognition of expressed emotion from speech and facial gestures was investigated in experiments on an audio-visual emotional database. A total of 106 audio and 240 visual features were extracted, and features were then selected with the Plus l-Take Away r algorithm based on the Bhattacharyya distance criterion. In the second step, linear transformation methods, principal component analysis (PCA) and linear discriminant analysis (LDA), were applied to the selected features, and Gaussian classifiers were used for classification of emotions. Performance was higher for LDA features than for PCA features, and the visual features outperformed the audio features under both transformations. Across a range of fusion schemes, the audio-visual results were close to those of the visual features alone. The highest recognition rates achieved were 53% with audio features, 98% with visual features, and 98% with audio-visual features selected by Bhattacharyya distance and transformed by LDA.

Index Terms: emotion recognition, multimodal feature selection, principal component analysis
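The abstract outlines a three-stage pipeline: feature selection with the Plus l-Take Away r algorithm under a Bhattacharyya distance criterion, linear reduction with PCA or LDA, and Gaussian classification. The following is a minimal Python sketch of that pipeline, assuming two emotion classes and synthetic data; the helper names (bhattacharyya, plus_l_take_away_r), the parameter choices (l=2, r=1, k=5), and the use of scikit-learn's LDA and QDA as stand-ins for the paper's Gaussian classifiers are illustrative assumptions, not details taken from the paper.

import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)

def bhattacharyya(X1, X2):
    """Bhattacharyya distance between two classes under Gaussian models."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Regularise covariances so small feature subsets stay well-conditioned.
    S1 = np.atleast_2d(np.cov(X1, rowvar=False)) + 1e-6 * np.eye(X1.shape[1])
    S2 = np.atleast_2d(np.cov(X2, rowvar=False)) + 1e-6 * np.eye(X2.shape[1])
    S = (S1 + S2) / 2.0
    d = m1 - m2
    mean_term = d @ np.linalg.solve(S, d) / 8.0
    cov_term = 0.5 * (np.linalg.slogdet(S)[1]
                      - 0.5 * (np.linalg.slogdet(S1)[1]
                               + np.linalg.slogdet(S2)[1]))
    return mean_term + cov_term

def plus_l_take_away_r(X1, X2, k, l=2, r=1):
    """Grow a feature subset to size k: add the l best, then drop the r worst."""
    selected, remaining = [], list(range(X1.shape[1]))
    crit = lambda s: bhattacharyya(X1[:, s], X2[:, s])
    while len(selected) < k:
        for _ in range(l):   # "plus l": forward selection steps
            best = max(remaining, key=lambda f: crit(selected + [f]))
            selected.append(best)
            remaining.remove(best)
        for _ in range(r):   # "take away r": backward elimination steps
            worst = max(selected,
                        key=lambda f: crit([g for g in selected if g != f]))
            selected.remove(worst)
            remaining.append(worst)
    return selected

# Synthetic two-class demo: select 5 of 20 features, reduce with LDA,
# then classify with a full-covariance Gaussian per class (QDA).
rng = np.random.default_rng(0)
X1 = rng.normal(0.0, 1.0, (100, 20))   # class 1 feature vectors
X2 = rng.normal(0.8, 1.0, (100, 20))   # class 2 feature vectors
subset = plus_l_take_away_r(X1, X2, k=5)
X = np.vstack([X1, X2])[:, subset]
y = np.array([0] * 100 + [1] * 100)
Z = LinearDiscriminantAnalysis(n_components=1).fit_transform(X, y)
clf = QuadraticDiscriminantAnalysis().fit(Z, y)
print(f"selected features: {subset}, training accuracy: {clf.score(Z, y):.2f}")

Swapping LinearDiscriminantAnalysis for sklearn.decomposition.PCA (fitted without labels) reproduces the PCA branch of the comparison; with two classes, LDA yields at most one discriminant component.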


Bibliographic reference.  Haq, Sanaul / Jackson, Philip J. B. / Edge, James D. (2008): "Audio-visual feature selection and reduction for emotion classification", In AVSP-2008, 185-190.