Sensitivity of Quantitative RT-MRI Metrics of Vocal Tract Dynamics to Image Reconstruction Settings

Johannes Töger, Yongwan Lim, Sajan Goud Lingala, Shrikanth S. Narayanan, Krishna S. Nayak


Real-time Magnetic Resonance Imaging (RT-MRI) is a powerful method for the quantitative analysis of speech. Current state-of-the-art methods use constrained reconstruction to achieve high frame rates and high spatial resolution. The reconstruction involves two free parameters that can be selected retrospectively: 1) the temporal resolution and 2) the regularization parameter λ, which balances temporal regularization against fidelity to the acquired MRI data. In this work, we study the sensitivity of derived quantitative measures of vocal tract function to these two parameters. Specifically, the cross-distance between the tongue tip and the alveolar ridge was investigated for different temporal resolutions (21, 42, 56, and 83 frames per second) and different values of the regularization parameter. Data from one subject are included; the phrase ‘one two three four five’ was repeated 8 times at a normal pace. The results show that 1) a high regularization factor leads to lower cross-distance values, 2) a low value of the regularization parameter gives poor reproducibility, and 3) a temporal resolution of at least 42 frames per second is desirable to achieve good reproducibility for all utterances in this speech task. The process employed here can be generalized to quantitative imaging of the vocal tract and other parts of the body.
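To make the cross-distance metric concrete, the sketch below shows one plausible way to compute it from tracked landmark coordinates, assuming the tongue-tip and alveolar-ridge positions have already been extracted from each reconstructed frame (the function name, coordinate format, and example values are illustrative, not the authors' implementation):

```python
import math

def cross_distance(tongue_tip, alveolar_ridge):
    """Per-frame Euclidean distance between two tracked landmarks.

    tongue_tip, alveolar_ridge: sequences of (x, y) coordinates
    (e.g. in mm), one entry per reconstructed frame.
    Returns one distance per frame.
    """
    return [math.hypot(tx - ax, ty - ay)
            for (tx, ty), (ax, ay) in zip(tongue_tip, alveolar_ridge)]

# Illustrative values only: a closing gesture toward a fixed alveolar ridge.
tip = [(10.0, 20.0), (11.0, 21.5), (12.0, 23.0)]
ridge = [(12.0, 26.0), (12.0, 26.0), (12.0, 26.0)]
distances = cross_distance(tip, ridge)  # shrinks as the tongue tip rises
```

Sweeping such a metric over reconstructions at each temporal resolution and λ value, as in the paper, then reduces the sensitivity analysis to comparing these per-frame distance traces across settings and repetitions.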


DOI: 10.21437/Interspeech.2016-168

Cite as

Töger, J., Lim, Y., Lingala, S.G., Narayanan, S.S., Nayak, K.S. (2016) Sensitivity of Quantitative RT-MRI Metrics of Vocal Tract Dynamics to Image Reconstruction Settings. Proc. Interspeech 2016, 165-169.

Bibtex
@inproceedings{Töger+2016,
author={Johannes Töger and Yongwan Lim and Sajan Goud Lingala and Shrikanth S. Narayanan and Krishna S. Nayak},
title={Sensitivity of Quantitative RT-MRI Metrics of Vocal Tract Dynamics to Image Reconstruction Settings},
year=2016,
booktitle={Interspeech 2016},
doi={10.21437/Interspeech.2016-168},
url={http://dx.doi.org/10.21437/Interspeech.2016-168},
pages={165--169}
}