Manual Cued Speech is an effective method for enhancing speech perception by hearing-impaired people. With this system, a speaker can clarify what is being said with the help of hand gestures. Seeing manual cues associated with lip shapes allows the cue receiver to identify speech elements unambiguously. A large amount of work has been devoted to the effectiveness of Cued Speech in visual identification, in the access to complete phonological representations, and in language acquisition or in learning to read and write. However, no work has investigated the temporal organization of Cued Speech production, i.e. the co-articulation of the Cued Speech articulators. In this framework, the present paper investigates the temporal organization of hand cue presentation in relation to lip motion and the corresponding acoustic patterns, in order to specify the nature of the syllabic structure of Cued Speech. The data reveal that the hand clearly precedes both the sound and the lip motion. Temporal coordination rules for French Cued Speech gestures are derived, and an audiovisual synthesizer based on these principles, generating CV sequences in Cued Speech, is presented.
Cite as: Attina, V., Beautemps, D., Cathiard, M.-A., Odisio, M. (2003) Toward an audiovisual synthesizer for Cued Speech: Rules for CV French syllables. Proc. Auditory-Visual Speech Processing, 227-232
@inproceedings{attina03_avsp,
  author    = {V. Attina and D. Beautemps and Marie-Agn\`es Cathiard and Matthias Odisio},
  title     = {{Toward an audiovisual synthesizer for Cued Speech: Rules for CV French syllables}},
  year      = {2003},
  booktitle = {Proc. Auditory-Visual Speech Processing},
  pages     = {227--232}
}