ISCA Archive AVSP 2003

Triphone-based coarticulation model

Elisabetta Bevacqua, Catherine Pelachaud

Our model of lip movements is based on real data (symmetric 'VCV' triphones) recorded from a speaker wearing passive markers. Target positions for vowels and consonants have been extracted from these data. Coarticulation is simulated by modifying the target points associated with consonants depending on the vocalic context, using a logistic function. Coarticulation rules are then applied to each facial parameter to simulate muscular tension. Our lip-movement model is applied to a 3D facial model compliant with the MPEG-4 standard.
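The abstract only names the ingredients of the approach (per-parameter consonant targets, a vocalic context, and a logistic weighting), so the Python sketch below is a hedged illustration of one way such a blend could be computed for a single facial parameter; the function and parameter names (coarticulated_target, context_strength, steepness) are assumptions for illustration, not taken from the paper.

import math

# Sketch only (assumed formulation, not the paper's implementation): each
# MPEG-4 facial animation parameter has a consonant target and a vowel target
# taken from the symmetric VCV context. A logistic weight, driven by a
# hypothetical "context strength" value, pulls the consonant target toward
# the vocalic context to mimic coarticulation.

def logistic(x, steepness=1.0, midpoint=0.0):
    """Standard logistic function; returns a weight in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-steepness * (x - midpoint)))

def coarticulated_target(consonant_target, vowel_context_target,
                         context_strength, steepness=4.0):
    """Shift the consonant target toward the vocalic context.

    context_strength and steepness are hypothetical tuning parameters;
    in the paper they would be derived per facial parameter from the
    measured VCV data.
    """
    w = logistic(context_strength, steepness)
    return (1.0 - w) * consonant_target + w * vowel_context_target

# Example: an illustrative lip-opening value for a consonant in an open-vowel context.
if __name__ == "__main__":
    c_target = 0.10   # consonant target (lips nearly closed), illustrative value
    v_target = 0.80   # vowel target (lips open), illustrative value
    print(coarticulated_target(c_target, v_target, context_strength=0.5))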


Cite as: Bevacqua, E., Pelachaud, C. (2003) Triphone-based coarticulation model. Proc. Auditory-Visual Speech Processing, 221-226

@inproceedings{bevacqua03_avsp,
  author={Elisabetta Bevacqua and Catherine Pelachaud},
  title={{Triphone-based coarticulation model}},
  year=2003,
  booktitle={Proc. Auditory-Visual Speech Processing},
  pages={221--226}
}