Eighth ISCA Workshop on Speech Synthesis

Barcelona, Catalonia, Spain
August 31-September 2, 2013

Expressive Speech Synthesis: Synthesising Ambiguity

Matthew P. Aylett (1,2), Blaise Potard (2), Christopher J. Pidcock (2)

(1) University of Edinburgh, UK; (2) CereProc Ltd, UK

Previous work in HCI has shown that ambiguity, normally avoided in interaction design, can contribute to a user's engagement by increasing interest and uncertainty. In this work, we create and evaluate synthetic utterances in which the text content conflicts with the emotion in the voice. We show that: 1) text content measurably alters the negative/positive perception of a spoken utterance; 2) changes in voice quality also produce this effect; 3) when the voice quality and text content are in conflict, the result is an ambiguous synthesised utterance. Results were analysed using an evaluation/activation space. Whereas the effect of text content was restricted to the negative/positive dimension (valence), voice quality also had a significant effect on how active or passive the utterance was perceived (activation).

Index Terms: speech synthesis, unit selection, expressive speech synthesis, emotion, prosody
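The analysis described above places each listener judgement in a two-dimensional evaluation/activation space. A minimal sketch of how such ratings might be summarised per synthesis condition is shown below; the condition labels and rating values are invented for illustration and do not come from the paper.

```python
# Hypothetical sketch: summarise listener ratings as points in a
# 2-D evaluation (valence) / activation space, one centroid per
# synthesis condition. Conditions and data are invented examples.
from statistics import mean

# Each rating is a (valence, activation) pair on a -1..1 scale,
# keyed by (text sentiment, voice quality). "Conflicting" conditions
# pair positive text with a tense (active/negative-leaning) voice.
ratings = {
    ("positive", "neutral"): [(0.6, -0.1), (0.7, 0.0), (0.5, -0.2)],
    ("negative", "tense"):   [(-0.6, 0.6), (-0.5, 0.7), (-0.7, 0.5)],
    ("positive", "tense"):   [(0.1, 0.6), (-0.1, 0.7), (0.0, 0.5)],
}

def condition_centroid(points):
    """Mean (valence, activation) across listeners for one condition."""
    valences, activations = zip(*points)
    return (mean(valences), mean(activations))

centroids = {cond: condition_centroid(pts) for cond, pts in ratings.items()}
```

A conflicting condition such as ("positive", "tense") would be expected to sit near zero on the valence axis (ambiguous) while remaining high on activation, consistent with the abstract's finding that voice quality, unlike text content, also moves the activation dimension.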


Bibliographic reference. Aylett, Matthew P. / Potard, Blaise / Pidcock, Christopher J. (2013): "Expressive speech synthesis: synthesising ambiguity", in SSW8, 217-221.