AVSP 2003 - International Conference on Audio-Visual Speech Processing

September 4-7, 2003
St. Jorioz, France

Triphone-Based Coarticulation Model

Elisabetta Bevacqua (1), Catherine Pelachaud (2)

(1) Dept. of Computer and System Science, Univ. of Rome, Italy
(2) LINC - Paragraphe, IUT of Montreuil, Univ. of Paris 8, France

Our model of lip movements is based on real data (symmetric 'VCV' triphones) recorded from a speaker to whom passive markers were applied. Target positions for vowels and consonants have been extracted from these data. Coarticulation is simulated by modifying the target points associated with consonants depending on the vocalic context, using a logistic function. Coarticulation rules are then applied to each facial parameter to simulate muscular tension. Our lip movement model is applied to a 3D facial model compliant with the MPEG-4 standard.
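To illustrate the idea of modifying a consonant target as a function of vocalic context through a logistic function, here is a minimal sketch. All names, parameters, and the particular blending scheme are illustrative assumptions, not taken from the paper:

```python
import math

def logistic(x, k=5.0, x0=0.5):
    # Standard logistic (sigmoid) function with steepness k and midpoint x0.
    return 1.0 / (1.0 + math.exp(-k * (x - x0)))

def coarticulated_target(consonant_target, vowel_target, resistance, k=5.0):
    # Hypothetical blend of a consonant's lip target toward the vocalic context.
    # 'resistance' in [0, 1]: high values preserve the consonant target
    # (strong articulatory constraint); low values let the vowel dominate.
    w = logistic(resistance, k=k, x0=0.5)
    return w * consonant_target + (1.0 - w) * vowel_target
```

For example, a lip-opening parameter for a consonant produced in an open-vowel context would be pulled toward the vowel's value when the consonant's articulatory resistance for that parameter is low.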



Bibliographic reference.  Bevacqua, Elisabetta / Pelachaud, Catherine (2003): "Triphone-based coarticulation model", in AVSP 2003, 221-226.