Emotion is an internal factor that can degrade speaker recognition performance by inducing extra intra-speaker vocal variability. Several enhancements have been applied to speaker recognition systems operating on emotional speech; however, these methods are limited by requiring emotional speech during training or knowledge of the speaker's emotional state during testing. This paper presents a novel approach based on Pitch-dependent Difference Detection and Modification (PDDM) to overcome this limitation. In this method, only neutral speech is used to train the speaker models, and no emotional-state information is needed during testing. Experimental results on the MASC corpus show that the method improves the identification rate by 4.7% in the best case over a traditional speaker recognition system.
Bibliographic reference. Huang, Ting / Yang, Yingchun (2008): "Applying pitch-dependent difference detection and modification to emotional speaker recognition", in Proc. INTERSPEECH 2008, 2751-2754.