AVSP 2003 - International Conference on Audio-Visual Speech Processing
September 4-7, 2003
The authors present two visual articulation models for speech synthesis and methods to obtain them from measured data. The visual articulation models are used to control visible articulator movements described by six motion parameters: one for the up-down movement of the lower jaw, three for the lips, and two for the tongue (see Section 2.1 for details). To obtain the data, a female speaker was measured with the 2D articulograph AG100 and simultaneously filmed. The first visual articulation model is a hybrid data- and rule-based model that selects and combines the most similar viseme patterns (Section 2.3). It is derived more or less directly from the measurements. The second model (Section 2.4) is rule-based, following the dominance principle suggested by Löfqvist. The parameter values for the second model are derived from the first one. Both models are integrated into MASSY, the Modular Audiovisual Speech SYnthesizer.
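The dominance principle mentioned above can be illustrated with a minimal sketch. This is not the authors' implementation, only a generic dominance-style blend in the spirit of Löfqvist's coarticulation model: each speech segment carries a target value for an articulatory parameter (e.g. lip opening) and a dominance function that peaks at the segment centre, and the resulting trajectory is the dominance-weighted mean of the targets. All function names, parameter values, and units here are illustrative assumptions.

```python
import math

def dominance(t, center, alpha=1.0, theta=0.05):
    """Illustrative exponential dominance function peaking at the
    segment centre (time in ms); alpha and theta are made-up defaults."""
    return alpha * math.exp(-theta * abs(t - center))

def trajectory(t, segments):
    """Blend per-segment targets into one parameter value at time t.

    segments: list of (target_value, segment_center_ms) tuples.
    Returns the dominance-weighted average of the targets.
    """
    num = sum(dominance(t, center) * target for target, center in segments)
    den = sum(dominance(t, center) for _, center in segments)
    return num / den

# Example: two adjacent segments with different lip-opening targets.
# Near a segment's centre its own target dominates; halfway between
# equally dominant segments the value is the midpoint of the targets.
segs = [(0.8, 100.0), (0.2, 200.0)]
```

With symmetric dominance functions, `trajectory(150.0, segs)` lands exactly between the two targets (0.5), while `trajectory(100.0, segs)` stays close to the first target — the smooth transition between targets is what the dominance principle contributes over hard viseme switching.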
Bibliographic reference. Fagel, Sascha / Clemens, Caroline (2003): "Two articulation models for audiovisual speech synthesis - description and determination", In AVSP 2003, 215-220.