ISCA Archive Interspeech 2009

Audio-visual speech asynchrony modeling in a talking head

Alexey Karpov, Liliya Tsirulnik, Zdeněk Krňoul, Andrey Ronzhin, Boris Lobanov, Miloš Železný

An audio-visual speech synthesis system that models asynchrony between the auditory and visual speech modalities is proposed in this paper. A corpus-based study of real recordings provided the data needed to understand the asynchrony between modalities, which is partially caused by co-articulation phenomena. A set of context-dependent timing rules and recommendations was elaborated to synchronize the auditory and visual speech cues of the animated talking head in a natural, human-like way. A cognitive evaluation of the model-based talking head for Russian, implementing the proposed asynchrony model, showed high intelligibility and naturalness of the audio-visual synthesized speech.

doi: 10.21437/Interspeech.2009-737

Cite as: Karpov, A., Tsirulnik, L., Krňoul, Z., Ronzhin, A., Lobanov, B., Železný, M. (2009) Audio-visual speech asynchrony modeling in a talking head. Proc. Interspeech 2009, 2911-2914, doi: 10.21437/Interspeech.2009-737

@inproceedings{karpov09_interspeech,
  author={Alexey Karpov and Liliya Tsirulnik and Zdeněk Krňoul and Andrey Ronzhin and Boris Lobanov and Miloš Železný},
  title={{Audio-visual speech asynchrony modeling in a talking head}},
  year=2009,
  booktitle={Proc. Interspeech 2009},
  pages={2911--2914},
  doi={10.21437/Interspeech.2009-737}
}