This paper presents the virtual speech cuer built in the context of the ARTUS project, which aims at watermarking hand and face gestures of a virtual animated agent into a broadcast audiovisual sequence. For deaf televiewers who master cued speech, the animated agent can then be superimposed, on demand and at the receiver, on the original broadcast as an alternative to subtitling. The paper presents the multimodal text-to-speech synthesis system and the first evaluation performed by deaf users.
Cite as: Gibert, G., Bailly, G., Elisei, F. (2006) Evaluating a virtual speech cuer. Proc. Interspeech 2006, paper 1539-Thu2A3O.1, doi: 10.21437/Interspeech.2006-609
@inproceedings{gibert06_interspeech,
  author={G. Gibert and Gérard Bailly and F. Elisei},
  title={{Evaluating a virtual speech cuer}},
  year=2006,
  booktitle={Proc. Interspeech 2006},
  pages={paper 1539-Thu2A3O.1},
  doi={10.21437/Interspeech.2006-609}
}