Expressive Control of Singing Voice Synthesis Using Musical Contexts and a Parametric F0 Model

Luc Ardaillon, Celine Chabot-Canet, Axel Roebel


Expressive singing voice synthesis requires appropriate control of both prosodic and timbral aspects. While intuitive control over the expressive parameters is desirable, synthesis systems should be able to produce convincing results directly from a score. As countless interpretations of the same score are possible, the system should also target a particular singing style, which implies mimicking the various strategies used by different singers. Among the control parameters involved, the pitch (F0) should be modeled first. In previous work, a parametric F0 model with intuitive controls was proposed, but no automatic way to choose its parameters was given. In the present work, we propose a new approach to modeling singing style based on the selection of parametric templates. In this approach, the F0 parameters and phoneme durations are extracted from annotated recordings, along with a rich description of contextual information, and stored to form a database of parametric templates. This database is then used to build a model of the singing style using decision trees. At the synthesis stage, appropriate parameters are selected according to the target contexts. The results produced by this approach have been evaluated by means of a listening test.
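The template-selection idea in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the context features, parameter names, and the simple back-off matching (standing in for the decision-tree clustering the paper describes) are all hypothetical, chosen only to show how stored parametric templates might be indexed by musical context and retrieved for unseen target contexts.

```python
# Hypothetical sketch: parametric F0 templates keyed by musical context.
# Feature names (interval direction, duration class, phrase position) and
# parameter names (attack_depth, vibrato_rate) are illustrative only.
templates = {
    # (interval_direction, duration_class, phrase_position) -> F0 parameters
    ("up",   "long",  "middle"): {"attack_depth": 1.5, "vibrato_rate": 5.5},
    ("up",   "short", "middle"): {"attack_depth": 0.8, "vibrato_rate": 0.0},
    ("down", "long",  "final"):  {"attack_depth": 2.0, "vibrato_rate": 6.0},
}

# Back-off order: progressively drop context features, coarsest match last.
# A learned decision tree would instead choose the most discriminative
# splits from data; this fixed order is a stand-in.
BACKOFF = [(0, 1, 2), (0, 1), (0,), ()]

def select_template(context):
    """Return the stored parameters whose context best matches the target
    context, backing off to coarser context keys when no exact match exists."""
    for keep in BACKOFF:
        target_sub = tuple(context[i] for i in keep)
        for key, params in templates.items():
            if tuple(key[i] for i in keep) == target_sub:
                return params
    raise KeyError("no template found for context %r" % (context,))

# Example: no exact match for this context, so the ("up", "long") template
# is reused.
params = select_template(("up", "long", "final"))
```

At synthesis time the selected parameters would drive the parametric F0 model for the corresponding note transition; the back-off ensures a template is always found, at the cost of a coarser contextual match.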


DOI: 10.21437/Interspeech.2016-1317

Cite as

Ardaillon, L., Chabot-Canet, C., Roebel, A. (2016) Expressive Control of Singing Voice Synthesis Using Musical Contexts and a Parametric F0 Model. Proc. Interspeech 2016, 1250-1254.

Bibtex
@inproceedings{Ardaillon+2016,
author={Luc Ardaillon and Celine Chabot-Canet and Axel Roebel},
title={Expressive Control of Singing Voice Synthesis Using Musical Contexts and a Parametric F0 Model},
year=2016,
booktitle={Interspeech 2016},
doi={10.21437/Interspeech.2016-1317},
url={http://dx.doi.org/10.21437/Interspeech.2016-1317},
pages={1250--1254}
}