Sixth International Conference on Spoken Language Processing
(ICSLP 2000)

Beijing, China
October 16-20, 2000

Speaker Dependent Emotion Recognition Using Speech Signals

Bong-Seok Kang, Chul-Hee Han, Sang-Tae Lee (1), Dae-Hee Youn, Chungyong Lee

Department of Electrical and Computer Engineering, Yonsei University, Seoul, Korea
(1) Korea Research Institute of Standards and Science, Yusong, Taejon, Korea

This paper examines three algorithms for recognizing a speaker's emotion from speech signals. The target emotions are happiness, sadness, anger, fear, boredom, and the neutral state. MLB (Maximum-Likelihood Bayes), NN (Nearest Neighbor), and HMM (Hidden Markov Model) algorithms are used as pattern-matching techniques. In all cases, pitch and energy are used as the features. The feature vectors for MLB and NN are composed of pitch mean, pitch standard deviation, energy mean, energy standard deviation, etc. For HMM, vectors of delta pitch with delta-delta pitch and delta energy with delta-delta energy are used. A corpus of emotional speech data was recorded, and a subjective evaluation of the data was performed by 23 untrained listeners. The subjective recognition rate was 56% and was compared with the classifiers' recognition rates. The MLB, NN, and HMM classifiers achieved recognition rates of 68.9%, 69.3%, and 89.1%, respectively, for speaker-dependent, context-independent classification.
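As a rough illustration of the static features described above, the following sketch builds a feature vector from frame-level pitch and energy tracks. This is an assumed reconstruction, not code from the paper: the function name, the exclusion of unvoiced (zero-pitch) frames, and the restriction to four statistics (the paper's "etc." includes further features) are choices made here for illustration.

```python
# Hypothetical sketch of the MLB/NN static feature vector:
# [pitch mean, pitch std, energy mean, energy std].
from statistics import mean, pstdev

def feature_vector(pitch, energy):
    """Compute utterance-level statistics from frame-level tracks.

    pitch  : per-frame pitch estimates in Hz (0 marks unvoiced frames)
    energy : per-frame short-time energy values
    Unvoiced frames are excluded from the pitch statistics (an
    assumption; the paper does not specify this detail).
    """
    voiced = [p for p in pitch if p > 0]
    return [mean(voiced), pstdev(voiced), mean(energy), pstdev(energy)]

# Toy example: five frames, two of them unvoiced.
fv = feature_vector([120.0, 0.0, 130.0, 125.0, 0.0],
                    [0.20, 0.05, 0.30, 0.25, 0.04])
```

A classifier such as NN would then compare these four-dimensional vectors across utterances; the HMM classifier instead consumes the frame-level delta and delta-delta trajectories directly.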


Bibliographic reference. Kang, Bong-Seok / Han, Chul-Hee / Lee, Sang-Tae / Youn, Dae-Hee / Lee, Chungyong (2000): "Speaker dependent emotion recognition using speech signals", in ICSLP-2000, vol. 2, 383-386.