9th Annual Conference of the International Speech Communication Association

Brisbane, Australia
September 22-26, 2008

A Trainable Trajectory Formation Model TD-HMM Parameterized for the LIPS 2008 Challenge

Gérard Bailly (1), Oxana Govokhina (1), Gaspard Breton (2), Frédéric Elisei (1), Christophe Savariaux (1)

(1) GIPSA, France; (2) Orange Labs, France

We describe the trainable trajectory formation model that will be used for the LIPS'2008 challenge organized at Interspeech'2008. The model predicts articulatory trajectories of a talking face from phonetic input. It is based on HMM synthesis, but asynchrony between acoustic and gestural boundaries (accounting, for example, for inaudible anticipatory gestures) is handled by a phasing model that predicts the delays between the acoustic boundaries of the allophones to be synthesized and the gestural boundaries of the HMM triphones. The HMM triphones and the phasing model are trained jointly in an iterative analysis-synthesis loop, which converges within a few iterations. Using several motion-capture datasets, we show that the phasing model significantly reduces the prediction error and captures subtle context-dependent anticipatory phenomena.
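The analysis-synthesis loop sketched in the abstract can be illustrated with a toy numerical example. The sketch below is an assumption-laden simplification, not the authors' implementation: it replaces the HMM triphone retraining step with a simple residual re-alignment, and models the phasing model as one delay per boundary, damped so the iterative convergence is visible.

```python
# Hedged sketch of an iterative analysis-synthesis loop for a phasing model.
# All names and data here are illustrative assumptions; the real system
# alternates between re-estimating boundary delays and retraining HMM triphones.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: acoustic boundaries of allophones (seconds) and "observed"
# gestural boundaries that lead the acoustics by context-dependent delays
# (anticipatory gestures start before the corresponding sound).
acoustic_bounds = np.cumsum(rng.uniform(0.05, 0.15, size=50))
true_delays = 0.03 * rng.random(50)
gestural_bounds = acoustic_bounds - true_delays

# Phasing model state: one predicted delay per boundary, initially zero
# (i.e. gestures assumed synchronous with the acoustics).
delays = np.zeros_like(acoustic_bounds)

for iteration in range(5):
    # "Synthesis" step: place gestural boundaries with the current delays.
    predicted = acoustic_bounds - delays
    # "Analysis" step: measure the residual misalignment against the data
    # (stand-in for realigning segments and retraining the triphone models).
    residual = predicted - gestural_bounds
    # Damped update of the phasing model; the residual shrinks each pass.
    delays += 0.5 * residual
    rms = float(np.sqrt(np.mean(residual ** 2)))

print(f"final RMS alignment error: {rms:.6f} s")
```

With this damping the RMS alignment error halves at every iteration, mirroring the paper's observation that the joint training converges within a few iterations.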


Bibliographic reference. Bailly, Gérard / Govokhina, Oxana / Breton, Gaspard / Elisei, Frédéric / Savariaux, Christophe (2008): "A trainable trajectory formation model TD-HMM parameterized for the LIPS 2008 challenge", in Proc. Interspeech 2008, pp. 2318-2321.