This paper proposes an audio-visual speech synthesis system that models the asynchrony between the auditory and visual speech modalities. A corpus-based study of real recordings provided the data needed to understand this asynchrony, which is partly caused by co-articulation phenomena. A set of context-dependent timing rules and recommendations was developed to synchronize the auditory and visual speech cues of the animated talking head in a natural, human-like manner. A cognitive evaluation of the model-based talking head for Russian, implementing the proposed asynchrony model, showed high intelligibility and naturalness of the synthesized audio-visual speech.
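The paper's actual timing rules are not given in this abstract, but the general idea of context-dependent onset shifts can be sketched as follows. This is a minimal illustrative example, not the authors' implementation: the phoneme classes, the rule table, and the offset values are hypothetical placeholders chosen only to show how a viseme onset could be advanced relative to its phoneme depending on the surrounding context.

```python
# Illustrative sketch (not the authors' method): shift viseme onsets relative to
# phoneme onsets using hypothetical context-dependent timing rules.

from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class Segment:
    phoneme: str      # acoustic unit label
    start_ms: float   # phoneme onset in the audio track
    end_ms: float     # phoneme offset in the audio track


# Hypothetical rules: (previous phoneme class, current phoneme class) -> lead of the
# visual cue in milliseconds (positive = viseme starts before the audible phoneme).
OFFSET_RULES = {
    ("vowel", "bilabial"): 80.0,     # lips close early before /p b m/ after a vowel
    ("any", "rounded_vowel"): 60.0,  # lip rounding anticipates /o u/
}


def classify(phoneme: Optional[str]) -> str:
    """Toy phoneme classifier used only for this illustration."""
    if phoneme is None:
        return "none"
    if phoneme in ("p", "b", "m"):
        return "bilabial"
    if phoneme in ("o", "u"):
        return "rounded_vowel"
    if phoneme in ("a", "e", "i"):
        return "vowel"
    return "other"


def viseme_timeline(segments: List[Segment]) -> List[Tuple[str, float, float]]:
    """Return (label, visual onset, audio offset) with context-dependent onset shifts."""
    timeline = []
    prev: Optional[Segment] = None
    for seg in segments:
        left = classify(prev.phoneme if prev else None)
        cur = classify(seg.phoneme)
        # Look up a specific left-context rule first, then a context-free fallback.
        lead = OFFSET_RULES.get((left, cur), OFFSET_RULES.get(("any", cur), 0.0))
        timeline.append((seg.phoneme, seg.start_ms - lead, seg.end_ms))
        prev = seg
    return timeline


if __name__ == "__main__":
    phrase = [Segment("a", 0, 120), Segment("m", 120, 200), Segment("u", 200, 320)]
    for label, start, end in viseme_timeline(phrase):
        print(f"{label}: visual onset {start:.0f} ms, audio segment ends {end:.0f} ms")
```

In this sketch the bilabial viseme for /m/ begins 80 ms before its audible onset after a vowel, mimicking the anticipatory articulation that makes the visual cue lead the acoustic one; the real system's rules and offsets would come from the corpus study described above.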
Bibliographic reference. Karpov, Alexey / Tsirulnik, Liliya / Krňoul, Zdeněk / Ronzhin, Andrey / Lobanov, Boris / Železný, Miloš (2009): "Audio-visual speech asynchrony modeling in a talking head", In INTERSPEECH-2009, 2911-2914.