INTERSPEECH 2010
11th Annual Conference of the International Speech Communication Association

Makuhari, Chiba, Japan
September 26-30, 2010

Active Appearance Models for Photorealistic Visual Speech Synthesis

Wesley Mattheyses, Lukas Latacz, Werner Verhelst

Vrije Universiteit Brussel, Belgium

The perceived quality of a synthetic visual speech signal depends greatly on the smoothness of the displayed visual articulators. This paper explains how concatenative visual speech synthesis systems can apply active appearance models to achieve smooth and natural visual output speech. By modeling the visual speech contained in the system's speech database, the shape and the texture of the talking head can be synthesized separately. This allows the system to balance the articulation strength of the visual articulators against the smoothness of the visual signal in order to optimize the synthesis. To further improve synthesis quality, an automatic database normalization strategy has been designed that removes variations from the database that are unrelated to speech production. As verified by a perception experiment, this normalization strategy significantly improves the perceived signal quality.
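The core idea of an active appearance model is that shape (landmark coordinates) and texture (shape-normalized pixel values) are modeled by separate linear subspaces, so a synthesizer can manipulate one without disturbing the other. The sketch below is a minimal illustration of that separation using PCA; the frame counts, landmark and pixel dimensions, the `var_kept` threshold, and the `gain` parameter are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: 200 frames of a talking head.
# Each frame has 68 landmark points (shape) and a 32x32 mouth
# texture warped to the mean shape (appearance).
n_frames = 200
shapes = rng.normal(size=(n_frames, 68 * 2))     # x/y landmark coords
textures = rng.normal(size=(n_frames, 32 * 32))  # grey-level pixels

def pca_model(data, var_kept=0.95):
    """Return (mean, basis) keeping enough components for var_kept."""
    mean = data.mean(axis=0)
    centered = data - mean
    # SVD of the centered data gives the principal components
    # without forming the covariance matrix explicitly.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    var = s ** 2 / (len(data) - 1)
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), var_kept)) + 1
    return mean, vt[:k]

# Independent linear models for shape and texture: the synthesizer
# can smooth or exaggerate the shape parameters without touching
# the texture parameters, and vice versa.
shape_mean, shape_basis = pca_model(shapes)
tex_mean, tex_basis = pca_model(textures)

def project(frame, mean, basis):
    """Map a frame into the low-dimensional model parameters."""
    return basis @ (frame - mean)

def reconstruct(params, mean, basis, gain=1.0):
    """Rebuild a frame; gain > 1 strengthens articulation, < 1 smooths."""
    return mean + (gain * params) @ basis

params = project(shapes[0], shape_mean, shape_basis)
frame0 = reconstruct(params, shape_mean, shape_basis)
```

Because shape and texture parameters live in independent subspaces, a concatenative synthesizer can, for example, smooth the shape trajectory at unit joins while leaving the texture stream untouched, which is the kind of shape/texture trade-off the paper exploits.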


Bibliographic reference. Mattheyses, Wesley / Latacz, Lukas / Verhelst, Werner (2010): "Active appearance models for photorealistic visual speech synthesis", in INTERSPEECH-2010, 1113-1116.