ISCA Archive Interspeech 2007

A system for transforming the emotion in speech: combining data-driven conversion techniques for prosody and voice quality

Zeynep Inanoglu, Steve Young

This paper describes a system that combines independent transformation techniques to endow a neutral utterance with a required target emotion. The system consists of three modules that are each trained on a limited amount of speech data and operate at different temporal layers. F0 contours are modelled and generated using context-sensitive syllable HMMs, while durations are transformed using phone-based relative decision trees. For spectral conversion, which is applied at the segmental level, two methods were investigated: a GMM-based voice conversion approach and a codebook selection approach. Converted test data were evaluated for three emotions using an independent emotion classifier as well as perceptual listening tests. The listening test results show that perception of the sad speech output by our system was comparable with the perception of human sad speech, while perception of surprise and anger was around 5% worse than that of a human speaker.
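For readers unfamiliar with the GMM-based voice conversion approach mentioned in the abstract, the sketch below illustrates the standard joint-density formulation (a GMM fitted on aligned source/target spectral frame pairs, followed by a weighted linear regression per mixture component). This is a minimal illustration under assumed placeholder data and dimensions, not the authors' implementation; the variable names, frame dimensionality, and number of mixtures are all hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical training data: each row is [neutral_frame ; emotional_frame],
# i.e. time-aligned spectral frame pairs (e.g. obtained via DTW).
d = 24                                    # assumed spectral frame dimensionality
joint = np.random.randn(5000, 2 * d)      # placeholder for real aligned frames

# Fit a joint-density GMM over concatenated source/target frames.
gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0)
gmm.fit(joint)

def convert_frame(x):
    """Map one neutral spectral frame x (shape (d,)) to its emotional estimate."""
    mu_x = gmm.means_[:, :d]              # source-part means,       shape (M, d)
    mu_y = gmm.means_[:, d:]              # target-part means,       shape (M, d)
    S_xx = gmm.covariances_[:, :d, :d]    # source-source covariances (M, d, d)
    S_yx = gmm.covariances_[:, d:, :d]    # target-source covariances (M, d, d)

    # Posterior P(m | x) under the marginal source GMM
    # (constant terms cancel in the normalisation below).
    log_p = np.array([
        np.log(gmm.weights_[m])
        - 0.5 * (np.linalg.slogdet(S_xx[m])[1]
                 + (x - mu_x[m]) @ np.linalg.solve(S_xx[m], x - mu_x[m]))
        for m in range(gmm.n_components)
    ])
    post = np.exp(log_p - log_p.max())
    post /= post.sum()

    # Conversion: posterior-weighted sum of per-component linear regressions.
    y = np.zeros(d)
    for m in range(gmm.n_components):
        y += post[m] * (mu_y[m] + S_yx[m] @ np.linalg.solve(S_xx[m], x - mu_x[m]))
    return y
```

Applied frame by frame to a neutral utterance, such a mapping transforms the spectral envelope towards the target emotion, while the F0 and duration modules described in the paper handle the prosodic layers separately.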


doi: 10.21437/Interspeech.2007-226

Cite as: Inanoglu, Z., Young, S. (2007) A system for transforming the emotion in speech: combining data-driven conversion techniques for prosody and voice quality. Proc. Interspeech 2007, 490-493, doi: 10.21437/Interspeech.2007-226

@inproceedings{inanoglu07_interspeech,
  author={Zeynep Inanoglu and Steve Young},
  title={{A system for transforming the emotion in speech: combining data-driven conversion techniques for prosody and voice quality}},
  year=2007,
  booktitle={Proc. Interspeech 2007},
  pages={490--493},
  doi={10.21437/Interspeech.2007-226}
}