Auditory-Visual Speech Processing (AVSP) 2009
University of East Anglia, Norwich, UK
This paper describes the text-driven 3D talking head system controlled by the SAT (Selection of Articulatory Targets) method, developed at the University of West Bohemia (UWB), that was used for participation in the LIPS 2009 challenge. It gives an overview of the methods used for visual speech animation, the parameterization of a human face and tongue, and the synthesis method. A 3D animation model with a pseudo-muscular animation schema is used to create visual speech animation suitable for lipreading.
Index Terms: facial animation, audio-visual speech synthesis, audio-to-visual mapping
Bibliographic reference. Krňoul, Zdeněk / Železný, Miloš (2009): "The UWB 3d talking head text-driven system controlled by the SAT method used for the LIPS 2009 challenge", In AVSP-2009, 167-168.