INTERSPEECH 2010
11th Annual Conference of the International Speech Communication Association

Makuhari, Chiba, Japan
September 26-30, 2010

Spectro-Temporal Modulations for Robust Speech Emotion Recognition

Lan-Ying Yeh, Tai-Shih Chi

National Chiao Tung University, Taiwan

Speech emotion recognition has mostly been studied on clean speech. In this paper, joint spectro-temporal features (RS features) are extracted from an auditory model and applied to detect the emotional state of noisy speech. The noisy speech is derived from the Berlin Emotional Speech database with additive white and babble noise at various SNR levels. A clean-train/noisy-test scenario is investigated to simulate conditions with unknown noise sources. The sequential forward floating selection (SFFS) method is adopted to demonstrate the redundancy among RS features, and further dimensionality reduction is performed. Compared with conventional MFCCs plus prosodic features, RS features show higher recognition rates, especially in low-SNR conditions.
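The SFFS method mentioned in the abstract alternates a forward inclusion step with a conditional backward exclusion step. The sketch below is a simplified, hypothetical illustration of that loop (the paper's actual feature scoring uses the emotion classifier's recognition rate; here `score` is an arbitrary subset-evaluation function supplied by the caller):

```python
def sffs(candidates, score, k):
    """Simplified sequential forward floating selection (illustrative sketch).

    candidates : list of feature indices to choose from
    score      : function mapping a feature subset to a quality value
                 (higher is better); a stand-in for classifier accuracy
    k          : target number of selected features
    """
    selected = []
    while len(selected) < k:
        # Forward step: add the single feature that most improves the score.
        remaining = [f for f in candidates if f not in selected]
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)

        # Floating (conditional exclusion) step: remove a previously chosen
        # feature, other than the one just added, whenever doing so strictly
        # improves the score.
        while len(selected) > 2:
            worst = max(selected[:-1],
                        key=lambda f: score([x for x in selected if x != f]))
            if score([x for x in selected if x != worst]) > score(selected):
                selected.remove(worst)
            else:
                break
    return selected


# Toy usage: features 0 and 2 are "informative", the rest add a penalty,
# so SFFS with k=2 should recover exactly {0, 2}.
toy_score = lambda s: sum(1 for f in s if f in (0, 2)) \
    - 0.5 * sum(1 for f in s if f not in (0, 2))
print(sorted(sffs(list(range(5)), toy_score, 2)))
```

The floating backward step is what distinguishes SFFS from plain sequential forward selection: it lets the search revisit earlier choices instead of committing to them greedily.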

Full Paper

Bibliographic reference.  Yeh, Lan-Ying / Chi, Tai-Shih (2010): "Spectro-temporal modulations for robust speech emotion recognition", In INTERSPEECH-2010, 789-792.