FAAVSP - The 1st Joint Conference on Facial Analysis, Animation, and
Auditory-Visual Speech Processing

Vienna, Austria
September 11-13, 2015

Integration of Auditory, Labial and Manual Signals in Cued Speech Perception by Deaf Adults: An Adaptation of the McGurk Paradigm

Clémence Bayard (1), Cécile Colin (2), Jacqueline Leybaert (2)

(1) CNRS and Université Grenoble Alpes, GIPSA-lab, Grenoble, France
(2) Center for Research in Cognition and Neurosciences (CRCN), Université Libre de Bruxelles (ULB), Belgium

Among deaf individuals fitted with a cochlear implant, some use Cued Speech (CS; a system in which each uttered syllable is accompanied by a complementary manual gesture) and must therefore combine auditory, labial, and manual information to perceive speech. We examined how audio-visual (AV) speech integration is affected by the presence of manual cues, and which source of information (auditory, labial, or manual) CS receivers primarily rely on depending on labial ambiguity. To address this issue, deaf CS users (N=36) and CS-naïve deaf participants (N=35) performed an identification task on two AV McGurk stimuli (one with a plosive consonant, one with a fricative). Manual cues were congruent with either the auditory information, the lip information, or the expected fusion. Results revealed that deaf individuals can merge auditory and labial information into a single unified percept: without manual cues, participants gave a high proportion of fusion responses (particularly with the ambiguous plosive McGurk stimuli). Results also suggested that manual cues can modify AV integration and that their impact differs between plosive and fricative McGurk stimuli.

Full Paper

Bibliographic reference.  Bayard, Clémence / Colin, Cécile / Leybaert, Jacqueline (2015): "Integration of auditory, labial and manual signals in cued speech perception by deaf adults: an adaptation of the McGurk paradigm", in FAAVSP-2015, 163-168.