ISCA Archive Interspeech 2009

How to improve TTS systems for emotional expressivity

Antonio Rui Ferreira Rebordao, Mostafa Al Masum Shaikh, Keikichi Hirose, Nobuaki Minematsu

Several experiments have revealed weaknesses in the emotional expressivity of current Text-To-Speech (TTS) systems. Although some TTS systems allow XML-based representations of prosodic and/or phonetic variables, few publications have considered using intelligent text processing as a pre-processing stage to detect affective information that can be used to tailor the parameters needed for emotional expressivity. This paper describes a technique for automatic prosodic parameterization based on affective clues. The technique recognizes the affective information conveyed in a text and, according to its emotional connotation, assigns appropriate pitch accents and other prosodic parameters by XML-tagging. This pre-processing assists the TTS system in generating synthesized speech that contains emotional clues. The experimental results are encouraging and suggest the possibility of suitable emotional expressivity in speech synthesis.
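The pre-processing idea described in the abstract, detecting an emotion in a sentence and then XML-tagging it with matching prosodic parameters, can be sketched as follows. This is an illustrative sketch only: the paper does not specify its XML schema or parameter values, so the SSML-style `<prosody>` element and the emotion-to-prosody mapping below are assumptions, not the authors' actual parameterization.

```python
# Illustrative sketch of affect-driven prosodic XML-tagging.
# The emotion labels and the pitch/rate/volume values are assumed
# for demonstration; they are not taken from the paper.
EMOTION_PROSODY = {
    "joy":     {"pitch": "+15%", "rate": "fast",   "volume": "loud"},
    "sadness": {"pitch": "-10%", "rate": "slow",   "volume": "soft"},
    "anger":   {"pitch": "+5%",  "rate": "fast",   "volume": "x-loud"},
    "neutral": {"pitch": "+0%",  "rate": "medium", "volume": "medium"},
}

def tag_sentence(text: str, emotion: str) -> str:
    """Wrap a sentence in an SSML-style <prosody> tag for its emotion.

    Unknown emotions fall back to neutral prosody, so the TTS input
    is always well-formed XML.
    """
    p = EMOTION_PROSODY.get(emotion, EMOTION_PROSODY["neutral"])
    return ('<prosody pitch="{pitch}" rate="{rate}" volume="{volume}">'
            '{text}</prosody>').format(text=text, **p)

print(tag_sentence("I won the lottery!", "joy"))
# <prosody pitch="+15%" rate="fast" volume="loud">I won the lottery!</prosody>
```

In a full pipeline, the `emotion` argument would come from an affect-recognition component analyzing the input text, and the tagged output would be passed to an XML-aware TTS engine.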


doi: 10.21437/Interspeech.2009-191

Cite as: Rebordao, A.R.F., Shaikh, M.A.M., Hirose, K., Minematsu, N. (2009) How to improve TTS systems for emotional expressivity. Proc. Interspeech 2009, 524-527, doi: 10.21437/Interspeech.2009-191

@inproceedings{rebordao09_interspeech,
  author={Antonio Rui Ferreira Rebordao and Mostafa Al Masum Shaikh and Keikichi Hirose and Nobuaki Minematsu},
  title={{How to improve TTS systems for emotional expressivity}},
  year=2009,
  booktitle={Proc. Interspeech 2009},
  pages={524--527},
  doi={10.21437/Interspeech.2009-191}
}