The digital transmission of talking faces requires more bandwidth than many target channels can provide, even when powerful image compression algorithms are used. A dedicated face coding algorithm would therefore be highly desirable, but its development has been hindered by the general problem of image motion estimation. In this paper we present a video-based system for face motion processing, analogous to the well-known voder-vocoder system for processing and coding acoustic speech signals. Like the vocoder, our 'face coder' consists of two independent parts: an analysis part that tracks non-rigid face motion, and a synthesis part that produces face animations. Results are shown for face motion tracking and for the subsequent animation derived either from the raw motion data or from the output of a Principal Component Analysis (PCA). The automatic tracking results were evaluated by comparison with a set of manually tracked points.
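The PCA step mentioned in the abstract lends itself to a short illustration. The Python snippet below is not the authors' implementation but a minimal sketch of the encode/decode idea, assuming the tracker delivers 2-D point positions flattened into a frames-by-coordinates matrix; the point count, frame count, and number of retained components are illustrative placeholders.

import numpy as np

# Hypothetical input: 2-D positions of N tracked face points over T video
# frames, flattened to a (T, 2N) matrix. Real input would come from the
# paper's motion tracker; random data serves as a stand-in here.
rng = np.random.default_rng(0)
T, n_points = 500, 20
X = rng.normal(size=(T, 2 * n_points))

# Analysis ('coder'): PCA via SVD of the mean-centred trajectories.
mean = X.mean(axis=0)
Xc = X - mean
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

k = 6                    # retained components (illustrative choice)
basis = Vt[:k]           # (k, 2N) principal motion components
codes = Xc @ basis.T     # (T, k) low-dimensional signal to transmit

# Synthesis ('decoder'): reconstruct point motion from the codes.
X_hat = codes @ basis + mean

rms = np.sqrt(np.mean((X - X_hat) ** 2))
kept = (s[:k] ** 2).sum() / (s ** 2).sum()
print(f"RMS reconstruction error: {rms:.3f}; variance retained: {kept:.1%}")

Transmitting only the k code coefficients per frame, plus the fixed basis, is what makes the vocoder analogy a bandwidth win: the channel carries a handful of numbers per frame rather than pixel data.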
Cite as: Kroos, C., Masuda, S., Kuratate, T., Vatikiotis-Bateson, E. (2001) Towards the facecoder: dynamic face synthesis based on image motion estimation in speech. Proc. Auditory-Visual Speech Processing, 24-29
@inproceedings{kroos01_avsp,
  author    = {Christian Kroos and Saeko Masuda and Takaaki Kuratate and Eric Vatikiotis-Bateson},
  title     = {{Towards the facecoder: dynamic face synthesis based on image motion estimation in speech}},
  booktitle = {Proc. Auditory-Visual Speech Processing},
  year      = {2001},
  pages     = {24--29}
}