INTERSPEECH 2010
11th Annual Conference of the International Speech Communication Association

Makuhari, Chiba, Japan
September 26-30, 2010

HMM-Based Text-to-Articulatory-Movement Prediction and Analysis of Critical Articulators

Zhen-Hua Ling (1), Korin Richmond (2), Junichi Yamagishi (2)

(1) University of Science & Technology of China, China
(2) University of Edinburgh, UK

In this paper we present a method to predict the movement of a speaker's mouth from text input using hidden Markov models (HMMs). We use a corpus of human articulatory movements, recorded by electromagnetic articulography (EMA), to train the HMMs. To predict articulatory movements from text, a suitable model sequence is selected and the maximum-likelihood parameter generation (MLPG) algorithm is used to generate output articulatory trajectories. In our experiments, we find that fully context-dependent models outperform monophone and quinphone models, achieving an average root mean square (RMS) error of 1.945 mm when state durations are predicted from text, and 0.872 mm when natural state durations are used. Finally, we analyze the prediction error by EMA dimension and phone type. A clear pattern emerges: the movements of so-called critical articulators can be predicted more accurately than average.
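To make the generation step concrete, the following is a minimal sketch of MLPG for a single articulatory dimension, assuming a simple static-plus-delta observation with the delta window [-0.5, 0, 0.5] and diagonal variances; the function name, window coefficients, and array layout are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def mlpg(means, variances):
        # Maximum-likelihood parameter generation (MLPG) for one EMA
        # dimension. means, variances: (T, 2) arrays of per-frame means and
        # diagonal variances for the static feature (column 0) and its delta
        # (column 1), gathered from the selected HMM state sequence.
        # (Hypothetical interface for illustration.)
        T = means.shape[0]
        # Window matrix W (2T x T): the first T rows copy the static
        # feature, the next T rows apply the delta window [-0.5, 0, 0.5].
        W = np.zeros((2 * T, T))
        W[:T, :] = np.eye(T)
        for t in range(T):
            if t > 0:
                W[T + t, t - 1] = -0.5
            if t < T - 1:
                W[T + t, t + 1] = 0.5
        # Stack means and precisions in the same (static, then delta) order.
        mu = np.concatenate([means[:, 0], means[:, 1]])
        prec = 1.0 / np.concatenate([variances[:, 0], variances[:, 1]])
        # Solve the MLPG normal equations (W' P W) c = W' P mu for the
        # static trajectory c that maximises the output likelihood.
        A = W.T @ (prec[:, None] * W)
        b = W.T @ (prec * mu)
        return np.linalg.solve(A, b)

Given per-frame means and variances read off the selected model sequence, mlpg() returns the smooth static trajectory whose implied deltas best match the predicted dynamics in the maximum-likelihood sense, which is what yields continuous articulator movements rather than piecewise-constant state means.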


Bibliographic reference. Ling, Zhen-Hua / Richmond, Korin / Yamagishi, Junichi (2010): "HMM-based text-to-articulatory-movement prediction and analysis of critical articulators", in INTERSPEECH-2010, pp. 2194-2197.