Using a Biomechanical Model and Articulatory Data for the Numerical Production of Vowels

Saeed Dabbaghchian, Marc Arnela, Olov Engwall, Oriol Guasch, Ian Stavness, Pierre Badin


We introduce a framework for studying speech production with ArtiSynth, a biomechanical model of the human vocal tract. Electromagnetic articulography data were used as input to an inverse tracking simulation that estimates the muscle activations needed to generate 3D jaw and tongue postures matching the target articulator positions. Acoustic simulations require the vocal tract geometry, but since the vocal tract is a cavity rather than a physical object, its geometry does not explicitly exist in a biomechanical model. We have therefore developed a fully automatic method to extract the 3D geometry (surface mesh) of the vocal tract by blending the geometries of the relevant articulators. This automatic extraction is essential, since methods requiring manual intervention are not feasible for large numbers of simulations or for the generation of dynamic sounds such as diphthongs. We then simulated the vocal tract acoustics using the Finite Element Method (FEM), which requires a high-quality vocal tract mesh free of irregular geometry and self-intersections. We demonstrate that the framework is applicable to acoustic FEM simulations across a wide range of vocal tract deformations. In particular, we present results for cardinal vowel production, including muscle activations, vocal tract geometries, and acoustic simulations.


DOI: 10.21437/Interspeech.2016-1500

Cite as

Dabbaghchian, S., Arnela, M., Engwall, O., Guasch, O., Stavness, I., Badin, P. (2016) Using a Biomechanical Model and Articulatory Data for the Numerical Production of Vowels. Proc. Interspeech 2016, 3569-3573.

Bibtex
@inproceedings{Dabbaghchian+2016,
  author={Saeed Dabbaghchian and Marc Arnela and Olov Engwall and Oriol Guasch and Ian Stavness and Pierre Badin},
  title={Using a Biomechanical Model and Articulatory Data for the Numerical Production of Vowels},
  year={2016},
  booktitle={Proc. Interspeech 2016},
  doi={10.21437/Interspeech.2016-1500},
  url={http://dx.doi.org/10.21437/Interspeech.2016-1500},
  pages={3569--3573}
}