AVSP 2003 - International Conference on Audio-Visual Speech Processing
September 4-7, 2003
Our model of lip movements is based on real data (symmetric 'VCV' triphones) recorded from a speaker fitted with passive markers. Target positions for vowels and consonants were extracted from these data. Coarticulation is simulated by modifying the target points associated with consonants according to the vocalic context, using a logistic function. Coarticulation rules are then applied to each facial parameter to simulate muscular tension. The lip-movement model is applied to a 3D facial model compliant with the MPEG-4 standard.
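The target-modification step described above can be illustrated with a minimal sketch. The function names, the `resistance` parameter (standing in for the per-parameter muscular tension), and the logistic slope and midpoint values are illustrative assumptions, not the paper's actual formulation:

```python
import math

def logistic(x, slope=1.0, midpoint=0.0):
    """Standard logistic function, mapping x into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-slope * (x - midpoint)))

def coarticulated_target(c_target, v_target, resistance, slope=5.0):
    """Shift a consonant's target for one facial parameter toward the
    vowel-context target.

    resistance in [0, 1] is a hypothetical per-parameter tension value:
    high resistance means the consonant target is barely affected by
    the surrounding vowels; low resistance lets the vocalic context
    dominate. The logistic curve shapes this influence nonlinearly.
    """
    influence = logistic(1.0 - resistance, slope=slope, midpoint=0.5)
    return c_target + influence * (v_target - c_target)

# A tense parameter keeps the consonant target; a lax one drifts
# toward the vowel target.
tense = coarticulated_target(0.0, 1.0, resistance=0.9)
lax = coarticulated_target(0.0, 1.0, resistance=0.1)
```

Here a highly resistant parameter yields a value close to the consonant target (0.0), while a weakly resistant one yields a value close to the vowel target (1.0), which is the qualitative behavior the coarticulation rules are meant to capture.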
Bibliographic reference. Bevacqua, Elisabetta / Pelachaud, Catherine (2003): "Triphone-based coarticulation model", In AVSP 2003, 221-226.