Towards a Method of Dynamic Vocal Tract Shapes Generation by Combining Static 3D and Dynamic 2D MRI Speech Data

Ioannis K. Douros, Anastasiia Tsukanova, Karyna Isaieva, Pierre-André Vuissoz, Yves Laprie


We present an algorithm for generating dynamic vocal tract shapes by combining 3D static and 2D dynamic speech MRI data. While static 3D images have better resolution and provide full spatial information, 2D dynamic images capture articulatory transitions. The aim of this work is to combine the strengths of these two types of data: to improve the image quality of the 2D dynamic images and to extend them to the 3D domain.

To produce a 3D dynamic consonant-vowel (CV) sequence, our algorithm takes as input the 2D CV transition and the static 3D targets for C and V. To obtain the enhanced sequence of images, the first step is to find a transformation between the 2D images and the mid-sagittal slice of the acoustically corresponding 3D image stack, and then a transformation between neighbouring sagittal slices in the 3D static image stack. Combining these transformations produces the final set of images. In the present study, we first examined the transformation from the 3D mid-sagittal frame to the 2D video in order to improve image quality, and then the extension of the 2D video to the third dimension with the aim of enriching spatial information.
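The composition of the two transformations described above can be sketched in code. The following is a minimal illustration only, not the authors' implementation: it assumes simple affine transforms (3x3 homogeneous matrices) stand in for the actual registrations, which for MRI data would likely be non-rigid. All function and variable names (`warp_affine`, `synthesize_stack`, `T_mid`, `slice_Ts`) are hypothetical.

```python
import numpy as np

def warp_affine(img, T, out_shape=None):
    """Warp a 2D image with a 3x3 homogeneous affine T (output <- input),
    using inverse mapping and nearest-neighbour sampling.
    (Stand-in for a real registration-based warp.)"""
    if out_shape is None:
        out_shape = img.shape
    Tinv = np.linalg.inv(T)
    ys, xs = np.indices(out_shape)
    # Homogeneous output coordinates, pulled back into the source image.
    coords = np.stack([ys.ravel(), xs.ravel(), np.ones(ys.size)])
    src = Tinv @ coords
    sy = np.round(src[0]).astype(int)
    sx = np.round(src[1]).astype(int)
    flat = np.zeros(ys.size, dtype=img.dtype)
    valid = (sy >= 0) & (sy < img.shape[0]) & (sx >= 0) & (sx < img.shape[1])
    flat[valid] = img[sy[valid], sx[valid]]
    return flat.reshape(out_shape)

def synthesize_stack(frame2d, T_mid, slice_Ts):
    """Map one 2D dynamic frame into a pseudo-3D stack:
    T_mid aligns the dynamic frame with the mid-sagittal slice of the
    static 3D stack; each slice_Ts[k] maps the mid-sagittal slice to a
    neighbouring sagittal slice. Composing the two yields one synthetic
    sagittal slice per k for this time frame."""
    return [warp_affine(frame2d, Tk @ T_mid) for Tk in slice_Ts]
```

Applying `synthesize_stack` to every frame of the 2D dynamic sequence would then give a time series of sagittal stacks, i.e. a 3D dynamic sequence, under the stated affine simplification.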


DOI: 10.21437/Interspeech.2019-2880

Cite as: Douros, I.K., Tsukanova, A., Isaieva, K., Vuissoz, P.-A., Laprie, Y. (2019) Towards a Method of Dynamic Vocal Tract Shapes Generation by Combining Static 3D and Dynamic 2D MRI Speech Data. Proc. Interspeech 2019, 879-883, DOI: 10.21437/Interspeech.2019-2880.


@inproceedings{Douros2019,
  author={Ioannis K. Douros and Anastasiia Tsukanova and Karyna Isaieva and Pierre-André Vuissoz and Yves Laprie},
  title={{Towards a Method of Dynamic Vocal Tract Shapes Generation by Combining Static 3D and Dynamic 2D MRI Speech Data}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={879--883},
  doi={10.21437/Interspeech.2019-2880},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2880}
}