9th Annual Conference of the International Speech Communication Association

Brisbane, Australia
September 22-26, 2008

The Entropy of the Articulatory Phonological Code: Recognizing Gestures from Tract Variables

Xiaodan Zhuang (1), Hosung Nam (2), Mark Hasegawa-Johnson (1), Louis M. Goldstein (2), Elliot Saltzman (2)

(1) University of Illinois at Urbana-Champaign, USA; (2) Haskins Laboratories, USA

We propose a "gestural pattern vector" to encode the instantaneous pattern of gesture activations across tract variables in the gestural score. The design of these gestural pattern vectors is the first step towards an automatic speech recognizer motivated by articulatory phonology, which is expected to be more robust to speech coarticulation and reduction than conventional speech recognizers built on the sequence-of-phones assumption. We use a tandem model to recover the instantaneous gestural pattern vectors from tract variable time functions in local time windows, achieving classification accuracy of up to 84.5% on synthesized data from one speaker. Since recognizing all gestural pattern vectors is equivalent to recognizing the full ensemble of gestures, this result suggests that the proposed gestural pattern vector might be a viable unit in statistical models for speech recognition.
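As a rough illustration of the idea in the abstract, the sketch below derives per-frame binary "gestural pattern vectors" from a gestural score. The tract-variable names follow the task-dynamics literature (LA = lip aperture, TBCD = tongue-body constriction degree, etc.), but the score format, frame rate, and function names here are assumptions for illustration, not the paper's actual representation.

```python
# Hypothetical set of tract variables, in a fixed order so that each
# frame maps to one binary activation vector.
TRACT_VARIABLES = ["LA", "LP", "TBCL", "TBCD", "TTCL", "TTCD", "VEL", "GLO"]

def gestural_pattern_vectors(score, n_frames, frame_ms=10.0):
    """score: list of (tract_variable, onset_ms, offset_ms) gesture
    activations from a gestural score. Returns one binary vector per
    frame marking which tract variables have an active gesture."""
    vectors = []
    for frame in range(n_frames):
        t = frame * frame_ms
        active = {tv for tv, onset, offset in score if onset <= t < offset}
        vectors.append([1 if tv in active else 0 for tv in TRACT_VARIABLES])
    return vectors

# Toy score: a lip-closure gesture overlapping a tongue-body gesture,
# as happens under coarticulation.
score = [("LA", 0.0, 50.0), ("TBCD", 30.0, 80.0)]
vecs = gestural_pattern_vectors(score, n_frames=8)
```

Classifying each frame's vector (e.g. with the tandem model the abstract describes) then amounts to recognizing the ensemble of overlapping gestures, rather than forcing the signal into a single phone sequence.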


Bibliographic reference. Zhuang, Xiaodan / Nam, Hosung / Hasegawa-Johnson, Mark / Goldstein, Louis M. / Saltzman, Elliot (2008): "The entropy of the articulatory phonological code: recognizing gestures from tract variables", in INTERSPEECH-2008, 1489-1492.