Auditory-Visual Speech Processing 2007 (AVSP2007)
Kasteel Groenendaal, Hilvarenbeek, The Netherlands
This paper presents new steps toward the animation of precise articulation. An audio-visual corpus for Czech was acquired, and a new method for the parameterization of visual speech was designed to obtain exact speech data. The parameterization method is primarily suitable for training data-driven visual speech synthesis systems. The audio-visual corpus also includes a specially designed test part. Furthermore, the paper presents a collection of text material suitable for testing visual speech perception, together with a procedure for carrying out such a test. The synthesis method, based on the selection of visual units, and the animation model of the talking head are extended. The synthesis system is evaluated both objectively and subjectively.
Bibliographic reference. Krnoul, Zdenek / Zelezný, Milos (2007): "Innovations in Czech audio-visual speech synthesis for precise articulation", In AVSP-2007, paper P30.