This paper proposes a real-time technique for visualizing tongue motion driven by ultrasound image sequences. Local feature description is used to follow characteristic speckle patterns at a set of mid-sagittal contour points in an ultrasound image sequence; these points then serve as markers of tongue movement. A 3D tongue model is subsequently driven by the motion data extracted from the ultrasound image sequences, with the modal warping technique used for real-time visualization of tongue deformation. The resulting system should prove useful in a variety of domains, including speech production research, articulation training, and education. Some parts of the interface are still under development; preliminary results will be shown in the demonstration.
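The abstract does not specify the feature-matching method, but speckle tracking in ultrasound is commonly done by normalized cross-correlation of a small patch around each contour point between consecutive frames. The sketch below illustrates that general idea only; the function name, patch and search-window sizes, and the NCC matcher are all illustrative assumptions, not the paper's actual descriptor.

```python
import numpy as np

def track_point(prev_frame, next_frame, point, patch=7, search=5):
    """Track one contour point between frames by normalized
    cross-correlation (NCC) of the speckle patch around it.
    Illustrative sketch only; the paper's feature descriptor
    may differ. Frames are 2D grayscale arrays."""
    r, c = point
    h = patch // 2
    tmpl = prev_frame[r - h:r + h + 1, c - h:c + h + 1].astype(float)
    tmpl -= tmpl.mean()  # zero-mean for NCC
    best, best_score = (r, c), -np.inf
    # Exhaustive search in a small window around the previous position.
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            cand = next_frame[rr - h:rr + h + 1, cc - h:cc + h + 1].astype(float)
            cand -= cand.mean()
            denom = np.linalg.norm(tmpl) * np.linalg.norm(cand)
            score = (tmpl * cand).sum() / denom if denom > 0 else -np.inf
            if score > best_score:
                best_score, best = score, (rr, cc)
    return best  # new (row, col) of the tracked point
```

Applied to every contour point in each new frame, such a tracker yields the per-point displacements that could drive a 3D model; a real implementation would also need periodic re-initialization, since speckle patterns decorrelate over time.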
Bibliographic reference. Xu, Kele / Yang, Yin / Jaumard-Hakoun, A. / Adda-Decker, Martine / Amelot, A. / Kork, S. K. Al / Crevier-Buchman, L. / Chawah, P. / Dreyfus, G. / Fux, T. / Pillot-Loiseau, C. / Roussel, P. / Stone, M. / Denby, B. (2014): "3D tongue motion visualization based on ultrasound image sequences", In INTERSPEECH-2014, 1482-1483.