Auditory-Visual Speech Processing (AVSP) 2009
University of East Anglia, Norwich, UK
We give an overview of SynFace, a speech-driven face animation system originally developed to support hard-of-hearing users of the telephone. For the 2009 LIPS challenge, SynFace includes not only articulatory motion but also non-verbal motion of the gaze, eyebrows and head, triggered by detection of acoustic correlates of prominence and cues for interaction control. In perceptual evaluations, both verbal and non-verbal movements have been found to have a positive impact on word recognition scores.
Bibliographic reference. Beskow, Jonas / Salvi, Giampiero / Al Moubayed, Samer (2009): "Synface - verbal and non-verbal face animation from audio", In AVSP-2009, 169.