Auditory-Visual Speech Processing
(AVSP 2001)

September 7-9, 2001
Aalborg, Denmark

Hidden Markov Models for Visual Speech Synthesis with Limited Data

Allan Arb (1), Steven Gustafson (2), Timothy Anderson (3), Raymond Slyh (3)

(1) Air Force Research Laboratory, Kirtland AFB, NM 87117, USA
(2) Air Force Institute of Technology, Wright-Patterson AFB, OH, USA
(3) Air Force Research Laboratory, Wright-Patterson AFB, OH, USA

This paper addresses a problem often encountered when estimating control points for visual speech synthesis: the stored video data cover only a limited subset of the trisemes (visemes in context with the previous and following visemes) needed to synthesize arbitrary sentences. First, Hidden Markov Models (HMMs) are estimated for each viseme present in the stored video data. Second, models are generated for each triseme in the training set. Next, a decision tree is used to cluster and relate HMM states that are similar in a contextual and statistical sense. The same tree is then used to estimate HMMs for trisemes absent from the stored video data whenever control points for such trisemes are required to synthesize the lip motion for a sentence. Finally, these HMMs are used to generate sequences of visual speech control points for the trisemes not occurring in the stored data. Comparisons between mouth shapes generated from the artificially generated control points and control points estimated from video withheld from HMM training indicate that the process produces accurate control points for the trisemes tested. This paper thus establishes a useful method for synthesizing realistic, audio-synchronized video facial features.
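
To make the decision-tree backoff concrete, the following is a minimal Python sketch, assuming a hypothetical context-question set and toy two-dimensional control points; the abstract does not specify the authors' actual questions, HMM topology, or feature dimensionality. Each leaf of the tree ties together contextually similar HMM states, so an unseen triseme answers the same context questions and inherits the tied statistics of the leaf it reaches.

    from __future__ import annotations
    from dataclasses import dataclass

    # A triseme is a viseme in context: (previous viseme, viseme, next viseme).
    Triseme = tuple[str, str, str]

    # Hypothetical broad-class context questions; the paper's actual question
    # set is not given in the abstract.
    QUESTIONS = {
        "left_is_bilabial": lambda t: t[0] in {"p", "b", "m"},
        "right_is_rounded": lambda t: t[2] in {"uw", "ow", "w"},
    }

    @dataclass
    class Node:
        question: str | None = None     # None marks a leaf
        yes: Node | None = None
        no: Node | None = None
        mean: list[float] | None = None  # tied control-point mean at a leaf

    def tied_mean(node: Node, triseme: Triseme) -> list[float]:
        """Walk the tree using the triseme's context to reach a tied state."""
        while node.question is not None:
            node = node.yes if QUESTIONS[node.question](triseme) else node.no
        return node.mean

    # Toy two-leaf tree; in practice one tree would be grown per HMM state
    # position by greedily splitting on the question that best groups the
    # training states (e.g., by likelihood gain).
    tree = Node(
        question="left_is_bilabial",
        yes=Node(mean=[0.10, 0.02]),  # lips nearly closed
        no=Node(mean=[0.45, 0.30]),   # lips open
    )

    # An unseen triseme still reaches a leaf, so a model can be synthesized
    # for it even though it never occurred in the stored video data.
    print(tied_mean(tree, ("m", "aa", "uw")))

In a full system, a sequence of such tied state means (one per HMM state, repeated for the state's duration) would form the control-point trajectory that drives the synthesized lip motion.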


Bibliographic reference. Arb, Allan / Gustafson, Steven / Anderson, Timothy / Slyh, Raymond (2001): "Hidden Markov models for visual speech synthesis with limited data", in AVSP-2001, 84-89.