ISCA Archive Interspeech 2017

Speaker-Specific Biomechanical Model-Based Investigation of a Simple Speech Task Based on Tagged-MRI

Keyi Tang, Negar M. Harandi, Jonghye Woo, Georges El Fakhri, Maureen Stone, Sidney Fels

We create two 3D biomechanical speaker models matched to medical image data of two healthy English speakers, using a new, hybrid registration technique that morphs a generic 3D biomechanical model to the medical images. The generic model of the head and neck includes the jaw, tongue, soft palate, epiglottis, lips and face, and is capable of simulating upper-airway biomechanics. We use cine and tagged magnetic resonance (MR) images captured while our volunteers repeated a simple utterance (/ə-gis/) synchronized to a metronome. We drive our models with internal tongue tissue trajectories extracted from the tagged MR images, which serve as targets for an inverse solver. For regions without tracked data points, the registered generic model moves according to the computed muscle activations. Our modeling effort spans a wide range of speech organs, illustrating the complexity of coupling among the oral structures during even a simple speech utterance.
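The inverse step described above, in which tracked tissue-point trajectories yield muscle activations that also drive untracked regions of the model, can be pictured as a regularized non-negative least-squares problem. The sketch below is illustrative only and is not the authors' solver; the linearized activation-to-velocity map J, the target velocities v_target, and the regularization weight lam are hypothetical stand-ins introduced here for the example.

# Minimal illustrative sketch (not the authors' actual pipeline) of an inverse
# solve: given target velocities of tracked tongue tissue points (e.g. from
# tagged MRI) and a linearized map from muscle activations to point velocities,
# estimate non-negative activations. J, v_target, and lam are assumptions.
import numpy as np
from scipy.optimize import nnls

def solve_activations(J, v_target, lam=1e-2):
    """Solve  min_a ||J a - v_target||^2 + lam ||a||^2  subject to a >= 0.

    J        : (m, k) linearized activation-to-velocity map (m tracked DOFs, k muscles)
    v_target : (m,)   target velocities of the tracked tissue points
    lam      : Tikhonov regularization weight (keeps activations small and smooth)
    """
    k = J.shape[1]
    # Stack the regularization rows so a single NNLS call solves the damped problem.
    A = np.vstack([J, np.sqrt(lam) * np.eye(k)])
    b = np.concatenate([v_target, np.zeros(k)])
    a, _ = nnls(A, b)            # non-negative least squares
    return np.clip(a, 0.0, 1.0)  # muscle activations are typically bounded to [0, 1]

# Toy usage: 6 tracked velocity components, 4 hypothetical muscle bundles.
rng = np.random.default_rng(0)
J = rng.standard_normal((6, 4))
v_target = rng.standard_normal(6)
print(solve_activations(J, v_target))

In this simplified picture, the computed activations can then be applied to the full registered model so that regions without tracked data points move consistently with the tracked ones, as the abstract describes.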


doi: 10.21437/Interspeech.2017-1576

Cite as: Tang, K., Harandi, N.M., Woo, J., El Fakhri, G., Stone, M., Fels, S. (2017) Speaker-Specific Biomechanical Model-Based Investigation of a Simple Speech Task Based on Tagged-MRI. Proc. Interspeech 2017, 2282-2286, doi: 10.21437/Interspeech.2017-1576

@inproceedings{tang17b_interspeech,
  author={Keyi Tang and Negar M. Harandi and Jonghye Woo and Georges El Fakhri and Maureen Stone and Sidney Fels},
  title={{Speaker-Specific Biomechanical Model-Based Investigation of a Simple Speech Task Based on Tagged-MRI}},
  year={2017},
  booktitle={Proc. Interspeech 2017},
  pages={2282--2286},
  doi={10.21437/Interspeech.2017-1576}
}