INTERSPEECH 2006 - ICSLP
Generating emotions in speech is currently an active research topic, driven by the need for modern human-machine interaction systems to produce expressive speech.
We present the EmoVoice system, which implements acoustic rules to simulate seven basic emotions in neutral speech. It uses pitch-synchronous time-scaling (PSTS) of the excitation signal to change the prosody and the most relevant glottal source parameters related to voice quality. The system also transforms other parameters of the vocal source signal to produce an irregular voicing quality. The mapping between speech parameters and the basic emotions was derived from measurements of the glottal parameters and from results reported by other authors. The evaluation of the system showed that it can generate recognizable emotions, but improvements are still needed to discriminate certain pairs of emotions.
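The core idea behind pitch-synchronous time-scaling is to lengthen or shorten a voiced segment by repeating or dropping whole pitch periods, so that duration changes without disturbing the local F0 contour. The following is a minimal illustrative sketch of that idea in Python; it is not the authors' PSTS implementation (which operates on the excitation signal and glottal source parameters), and the function name, pitch-mark representation, and nearest-period mapping are assumptions for the example.

```python
import numpy as np

def pitch_synchronous_time_scale(signal, pitch_marks, alpha):
    """Illustrative sketch (not the paper's PSTS): scale the duration of a
    voiced segment by `alpha` by repeating (alpha > 1) or dropping
    (alpha < 1) whole pitch periods, preserving each period's waveform.
    `pitch_marks` are sample indices of period boundaries (e.g. glottal
    closure instants)."""
    # Slice the signal into individual pitch periods.
    periods = [signal[pitch_marks[i]:pitch_marks[i + 1]]
               for i in range(len(pitch_marks) - 1)]
    n_out = max(1, int(round(alpha * len(periods))))
    # Map each output period to the nearest input period.
    idx = np.minimum((np.arange(n_out) / alpha).astype(int),
                     len(periods) - 1)
    return np.concatenate([periods[i] for i in idx])

# Toy example: a 100 Hz pulse train at 8 kHz, stretched to 1.5x duration.
fs = 8000
marks = np.arange(0, 4000, 80)   # one pitch mark per 10 ms period
sig = np.zeros(4000)
sig[marks] = 1.0                 # one pulse per period
stretched = pitch_synchronous_time_scale(sig, marks, 1.5)
```

Because whole periods are copied, the pulse spacing (and hence the perceived F0) is unchanged in the stretched output; a real system would additionally smooth at period joins, e.g. by overlap-add.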
Bibliographic reference. Cabral, João P. / Oliveira, Luís C. (2006): "Emovoice: a system to generate emotions in speech", In INTERSPEECH-2006, paper 1645-Wed2BuP.3.