An Articulatory-Based Singing Voice Synthesis Using Tongue and Lips Imaging

Aurore Jaumard-Hakoun, Kele Xu, Clémence Leboullenger, Pierre Roussel-Ragot, Bruce Denby


Ultrasound imaging of the tongue and video of the lips can be used to investigate specific articulatory gestures in speech and singing. In this study, tongue and lip image sequences recorded during singing performances are used to predict vocal tract properties via Line Spectral Frequencies (LSF). We focus on the traditional Corsican singing style "Cantu in paghjella". A multimodal Deep Autoencoder (DAE) extracts salient descriptors directly from the tongue and lip images. LSF values are then predicted from the most relevant of these features using a multilayer perceptron. A vocal tract model is derived from the predicted LSF, while a glottal flow model is computed from a synchronized electroglottographic recording. Articulatory-based singing voice synthesis combines the two models. Both the prediction quality and the synthesized singing voice obtained with this method outperform the state of the art.
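As background for how a vocal tract filter can be derived from predicted LSFs, the sketch below shows the standard conversion between LPC coefficients and Line Spectral Frequencies (the symmetric/antisymmetric polynomial decomposition). This is a generic NumPy illustration of the textbook technique, not the authors' implementation; the example polynomial and pole locations are arbitrary choices for the demo.

```python
import numpy as np

def poly2lsf(a):
    """LPC coefficients [1, a1, ..., aM] (M even) -> sorted LSFs in (0, pi)."""
    a = np.asarray(a, dtype=float)
    # Symmetric P(z) = A(z) + z^-(M+1) A(1/z) and antisymmetric Q(z).
    p = np.concatenate([a, [0.0]]) + np.concatenate([[0.0], a[::-1]])
    q = np.concatenate([a, [0.0]]) - np.concatenate([[0.0], a[::-1]])
    # Remove the trivial roots: P has one at z = -1, Q has one at z = +1.
    p, _ = np.polydiv(p, [1.0, 1.0])
    q, _ = np.polydiv(q, [1.0, -1.0])
    # Remaining roots lie on the unit circle; LSFs are their positive angles.
    angles = np.concatenate([np.angle(np.roots(p)), np.angle(np.roots(q))])
    return np.sort(angles[angles > 0])

def lsf2poly(lsf):
    """Sorted LSFs -> LPC coefficients [1, a1, ..., aM] via A = (P + Q)/2."""
    w = np.sort(np.asarray(lsf, dtype=float))
    # For minimum-phase A(z), P and Q roots interleave on the unit circle,
    # starting with a P root just above angle 0 (Q owns the root at 0 itself).
    p = np.array([1.0])
    for wk in w[0::2]:                      # odd-indexed LSFs -> roots of P
        p = np.convolve(p, [1.0, -2.0 * np.cos(wk), 1.0])
    q = np.array([1.0])
    for wk in w[1::2]:                      # even-indexed LSFs -> roots of Q
        q = np.convolve(q, [1.0, -2.0 * np.cos(wk), 1.0])
    p = np.convolve(p, [1.0, 1.0])          # restore root at z = -1
    q = np.convolve(q, [1.0, -1.0])         # restore root at z = +1
    # Highest-order terms of P and Q cancel; drop the resulting zero.
    return (0.5 * (p + q))[:-1]

# Demo: a stable 4th-order all-pole filter (arbitrary pole placement).
poles = [0.9 * np.exp(0.5j), 0.9 * np.exp(-0.5j),
         0.8 * np.exp(1.5j), 0.8 * np.exp(-1.5j)]
a = np.poly(poles).real        # [1, a1, a2, a3, a4]
lsf = poly2lsf(a)              # 4 ascending angles in (0, pi)
a_rec = lsf2poly(lsf)          # round trip recovers the LPC coefficients
```

Driving the all-pole filter `1/A(z)` obtained this way with a glottal flow signal gives the usual source-filter synthesis the abstract refers to.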


DOI: 10.21437/Interspeech.2016-385

Cite as

Jaumard-Hakoun, A., Xu, K., Leboullenger, C., Roussel-Ragot, P., Denby, B. (2016) An Articulatory-Based Singing Voice Synthesis Using Tongue and Lips Imaging. Proc. Interspeech 2016, 1467-1471.

Bibtex
@inproceedings{Jaumard-Hakoun+2016,
  author={Aurore Jaumard-Hakoun and Kele Xu and Clémence Leboullenger and Pierre Roussel-Ragot and Bruce Denby},
  title={An Articulatory-Based Singing Voice Synthesis Using Tongue and Lips Imaging},
  year={2016},
  booktitle={Interspeech 2016},
  doi={10.21437/Interspeech.2016-385},
  url={http://dx.doi.org/10.21437/Interspeech.2016-385},
  pages={1467--1471}
}