INTERSPEECH 2012
13th Annual Conference of the International Speech Communication Association

Portland, OR, USA
September 9-13, 2012

Towards Glottal Source Controllability in Expressive Speech Synthesis

Jaime Lorenzo-Trueba (1), Roberto Barra-Chicote (1), Tuomo Raitio (2), Nicolas Obin (3), Paavo Alku (2), Junichi Yamagishi (4), Juan M. Montero (1)

(1) Speech Technology Group, ETSI Telecomunicacion, Universidad Politecnica de Madrid, Spain
(2) Department of Signal Processing and Acoustics, Aalto University, Finland
(3) Sound Analysis and Synthesis, IRCAM, Paris, France
(4) CSTR, University of Edinburgh, UK

In order to obtain more human-like human-machine interfaces, we must first give them expressive capabilities in the form of emotional and stylistic features, so as to adapt them closely to the intended task. Replicating these features requires more than merely reproducing the prosodic information of fundamental frequency and speaking rhythm. We propose an additional layer: modification of the glottal source model, for which we make use of the GlottHMM parameters. This paper analyzes the viability of such an approach by verifying that expressive nuances are captured by these features, obtaining recognition rates of 95% on styled speech and 82% on emotional speech. We then evaluate the effect of speaker bias and recording environment on the source modeling in order to quantify possible problems when analyzing multi-speaker databases. Finally, we propose a separation of speaking styles for Spanish based on prosodic features and verify its perceptual significance.

Index Terms: expressive speech synthesis, speaking style, glottal source modeling


Bibliographic reference. Lorenzo-Trueba, Jaime / Barra-Chicote, Roberto / Raitio, Tuomo / Obin, Nicolas / Alku, Paavo / Yamagishi, Junichi / Montero, Juan M. (2012): "Towards glottal source controllability in expressive speech synthesis", in INTERSPEECH-2012, 1620-1623.