INTERSPEECH 2011
12th Annual Conference of the International Speech Communication Association

Florence, Italy
August 27-31, 2011

A Multimodal Real-Time MRI Articulatory Corpus for Speech Research

Shrikanth Narayanan, Erik Bresch, Prasanta Kumar Ghosh, Louis Goldstein, Athanasios Katsamanis, Yoon Kim, Adam Lammert, Michael Proctor, Vikram Ramanarayanan, Yinghua Zhu

University of Southern California, USA

We present MRI-TIMIT: a large-scale database of synchronized audio and real-time magnetic resonance imaging (rtMRI) data for speech research. The database currently consists of speech data acquired from two male and two female speakers of American English. Subjects' upper airways were imaged in the midsagittal plane while they read the same 460-sentence set used in the MOCHA-TIMIT corpus [1]. The accompanying acoustic recordings were phonemically transcribed using forced alignment. Vocal tract tissue boundaries were automatically identified in each video frame, allowing for dynamic quantification of each speaker's midsagittal articulation. The database and companion toolset provide a unique resource with which to examine articulatory-acoustic relationships in speech production.
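
As a rough illustration of how the synchronized modalities can be used together, the sketch below maps forced-aligned phone intervals onto rtMRI frame indices so that articulatory frames can be grouped by phone. The frame rate, the alignment tuple format, and the function name are illustrative assumptions for this sketch, not part of the released corpus or toolset.

    # Minimal sketch: align phone-level timestamps from forced alignment
    # with rtMRI video frames, assuming a fixed frame rate. All names and
    # values here are hypothetical; consult the corpus documentation for
    # the actual frame rate and alignment file format.

    FRAME_RATE_HZ = 23.0  # assumed rtMRI frame rate, for illustration only

    def phones_to_frames(alignment, frame_rate=FRAME_RATE_HZ):
        """Return (phone, first_frame, last_frame) for each aligned phone.

        `alignment` is a list of (phone, start_sec, end_sec) tuples, as
        one might extract from forced-alignment output.
        """
        spans = []
        for phone, start_sec, end_sec in alignment:
            first = int(start_sec * frame_rate)
            last = max(first, int(end_sec * frame_rate) - 1)
            spans.append((phone, first, last))
        return spans

    if __name__ == "__main__":
        # Hypothetical alignment for the start of one utterance.
        alignment = [("sil", 0.00, 0.35), ("dh", 0.35, 0.41), ("ax", 0.41, 0.52)]
        for phone, f0, f1 in phones_to_frames(alignment):
            print(f"{phone}: frames {f0}-{f1}")

Given such spans, per-phone articulatory measurements (e.g., constriction degree from the automatically extracted tissue boundaries) can be pooled across repetitions and speakers.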

Reference

  1. A. Wrench and W. Hardcastle, "A multichannel articulatory speech database and its application for automatic speech recognition," in Proc. 5th Seminar on Speech Production, Kloster Seeon, 2000, pp. 305-308.

Bibliographic reference.  Narayanan, Shrikanth / Bresch, Erik / Ghosh, Prasanta Kumar / Goldstein, Louis / Katsamanis, Athanasios / Kim, Yoon / Lammert, Adam / Proctor, Michael / Ramanarayanan, Vikram / Zhu, Yinghua (2011): "A multimodal real-time MRI articulatory corpus for speech research", In INTERSPEECH-2011, 837-840.