Discriminating between High-Arousal and Low-Arousal Emotional States of Mind using Acoustic Analysis

Esther Ramdinmawii, V. K. Mittal


Identification of emotions from human speech can be attempted by focusing upon three aspects of emotional speech: valence, arousal and dominance. In this paper, changes in the production characteristics of emotional speech are examined to discriminate between the high-arousal and low-arousal emotions, and amongst emotions within each of these categories. The basic emotions anger, happiness and fear are examined as high-arousal, and neutral speech and sadness as low-arousal emotional speech. Discriminating changes are examined first in the excitation source characteristics, i.e., the instantaneous fundamental frequency (F0) derived using the zero-frequency filtering (ZFF) method. Differences observed in the spectrograms are then validated by examining changes in the combined characteristics of the source and the vocal tract filter, i.e., the strength of excitation (SoE), derived using the ZFF method, and signal energy features. Emotions within each category are distinguished by examining changes in two scarcely explored discriminating features, namely, the zero-crossing rate and the ratios amongst the spectral sub-band energies computed using the short-time Fourier transform. The effectiveness of these features in discriminating emotions is validated using two emotion databases, Berlin EMODB (German) and IIT-KGP-SESC (Telugu). The proposed features exhibit highly encouraging results in discriminating these emotions. This study can be helpful towards automatic classification of emotions from speech.
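The two discriminating features highlighted in the abstract, zero-crossing rate and spectral sub-band energy ratios, can be sketched for a single analysis frame as below. This is a minimal illustrative sketch, not the paper's implementation: the 25 ms frame length, Hann window, and the three frequency bands are assumed choices, and a pure 200 Hz tone stands in for a voiced speech frame.

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of consecutive-sample pairs whose signs differ."""
    signs = np.sign(frame)
    signs[signs == 0] = 1          # treat exact zeros as positive
    return float(np.mean(signs[:-1] != signs[1:]))

def subband_energy_ratios(frame, fs, bands):
    """Energy in each (lo, hi) band as a fraction of total spectral energy."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    total = spec.sum()
    return [float(spec[(freqs >= lo) & (freqs < hi)].sum() / total)
            for lo, hi in bands]

fs = 16000                              # assumed sampling rate
t = np.arange(0, 0.025, 1.0 / fs)       # one 25-ms frame (400 samples)
frame = np.sin(2 * np.pi * 200 * t)     # 200-Hz tone as a voiced-frame stand-in

zcr = zero_crossing_rate(frame)
ratios = subband_energy_ratios(frame, fs, [(0, 1000), (1000, 4000), (4000, 8000)])
```

For the 200 Hz tone, nearly all of the spectral energy falls in the lowest band, and the zero-crossing rate is roughly twice the tone frequency divided by the sampling rate (about 0.025 crossings per sample); in practice such per-frame values would be tracked across an utterance to compare emotion categories.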


DOI: 10.21437/SMM.2018-1

Cite as: Ramdinmawii, E., Mittal, V.K. (2018) Discriminating between High-Arousal and Low-Arousal Emotional States of Mind using Acoustic Analysis. Proc. Workshop on Speech, Music and Mind 2018, 1-5, DOI: 10.21437/SMM.2018-1.


@inproceedings{Ramdinmawii2018,
  author={Esther Ramdinmawii and V. K. Mittal},
  title={Discriminating between High-Arousal and Low-Arousal Emotional States of Mind using Acoustic Analysis},
  year=2018,
  booktitle={Proc. Workshop on Speech, Music and Mind 2018},
  pages={1--5},
  doi={10.21437/SMM.2018-1},
  url={http://dx.doi.org/10.21437/SMM.2018-1}
}