INTERSPEECH 2008
9th Annual Conference of the International Speech Communication Association

Brisbane, Australia
September 22-26, 2008

Automatic Lip Synchronization by Speech Signal Analysis

Goranka Zoric, Aleksandra Cerekovic, Igor S. Pandzic

University of Zagreb, Croatia

In this paper, a system for the automatic lip synchronization of a virtual 3D human based only on the speech input is described. The speech signal is classified into viseme classes using neural networks. Visemes, the visual representations of phonemes defined in MPEG-4 FA, are used for face synthesis.
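A minimal sketch of the pipeline the abstract describes is given below: per-frame acoustic features are fed to a neural-network classifier that outputs an MPEG-4 viseme index per frame, which can then drive a facial animation player. The feature choice (MFCCs), the network architecture, and the placeholder training data are assumptions for illustration, not the authors' exact configuration.

    # Hypothetical sketch: speech frames -> features -> neural net -> MPEG-4 viseme index
    import numpy as np
    import librosa
    from sklearn.neural_network import MLPClassifier

    N_VISEMES = 15  # MPEG-4 FA: viseme 0 ("none"/neutral) plus 14 standard visemes

    def frame_features(wav_path, n_mfcc=13, hop_ms=10):
        """Extract per-frame MFCC features from a speech file (assumed front end)."""
        y, sr = librosa.load(wav_path, sr=16000)
        hop = int(sr * hop_ms / 1000)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc, hop_length=hop)
        return mfcc.T  # shape: (n_frames, n_mfcc)

    # Placeholder training data; in practice, frame labels would come from a
    # phonetically annotated corpus via a phoneme-to-viseme mapping.
    X_train = np.random.randn(1000, 13)
    y_train = np.random.randint(0, N_VISEMES, 1000)

    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300)
    clf.fit(X_train, y_train)

    # At synthesis time, each audio frame is mapped to a viseme index that
    # drives the MPEG-4 facial animation of the virtual human.
    frames = frame_features("speech.wav")
    viseme_track = clf.predict(frames)  # one MPEG-4 viseme index per frame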


Bibliographic reference.  Zoric, Goranka / Cerekovic, Aleksandra / Pandzic, Igor S. (2008): "Automatic lip synchronization by speech signal analysis", In INTERSPEECH-2008, 2323.