ISCA Archive AVSP 2013

Speech animation using electromagnetic articulography as motion capture data

Ingmar Steiner, Korin Richmond, Slim Ouni

Electromagnetic articulography (EMA) captures the position and orientation of a number of markers, attached to the articulators, during speech. As such, it performs the same function for speech that conventional optical motion capture, a long-time staple technique of the animation industry, does for full-body movements. In this paper, EMA data is processed from a motion-capture perspective and applied to the visualization of an existing multimodal corpus of articulatory data, creating a kinematic 3D model of the tongue and teeth by adapting a conventional motion-capture-based animation paradigm. This is accomplished using off-the-shelf, open-source software. Such an animated model can then be easily integrated into multimedia applications as a digital asset, allowing the analysis of speech production in an intuitive and accessible manner. The processing of the EMA data, its co-registration with 3D data from vocal tract magnetic resonance imaging (MRI) and dental scans, and the modeling workflow are presented in detail, and several open issues are discussed.

Index Terms: speech production, articulatory data, electromagnetic articulography, vocal tract, motion capture, visualization


Cite as: Steiner, I., Richmond, K., Ouni, S. (2013) Speech animation using electromagnetic articulography as motion capture data. Proc. Auditory-Visual Speech Processing, 55-60

@inproceedings{steiner13_avsp,
  author={Ingmar Steiner and Korin Richmond and Slim Ouni},
  title={{Speech animation using electromagnetic articulography as motion capture data}},
  year=2013,
  booktitle={Proc. Auditory-Visual Speech Processing},
  pages={55--60}
}