ISCA Archive AVSP 2001

Asking a naive question about the McGurk effect: Why does audio [b] give more [d] percepts with visual [g] than with visual [d]?

M.A. Cathiard, Jean-Luc Schwartz, C. Abry

Why does audio [b] give more [d] percepts with visual [g] than with visual [d], as in the present classical McGurk experiment? One explanation offered for this asymmetry is a language bias towards [d]. Contrary to what is sometimes taken for granted in the lipreading literature, visual [g] does not give more [d] than [g] responses. In fact, [d] and [g] are neither visemically nor auditorily equivalent. They are fully distinguishable in audition as well as in vision, where 80% correct identification is common under laboratory conditions, as in the present experiment. We show here that, in spite of these highly differentiating scores, FLMP modelling can quite surprisingly account for such an asymmetry by tuning the very small remaining values, which is highly unsatisfactory. We suggest another explanation for this asymmetry, which could be grounded in brain mechanisms dedicated to the computation of auditory and visual mouth-opening movements, i.e. audio movement and visual velocity detectors and integrators, dedicated to the bimodal integration of place targets in speech.
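For concreteness, here is a minimal sketch of the FLMP combination rule (multiply the per-alternative audio and visual supports, then normalize). The support values are illustrative assumptions consistent with roughly 80-90% correct unimodal identification; they are not the fitted parameters of the present experiment.

# Minimal FLMP sketch (Python). Support values are illustrative
# assumptions, NOT the paper's fitted parameters.

def flmp(audio, visual):
    # Multiplicative combination of per-alternative supports,
    # followed by normalization (FLMP's relative goodness rule).
    fused = {k: audio[k] * visual[k] for k in audio}
    total = sum(fused.values())
    return {k: v / total for k, v in fused.items()}

audio_b  = {"b": 0.895, "d": 0.100, "g": 0.005}  # clear audio [b]
visual_g = {"b": 0.001, "d": 0.150, "g": 0.849}  # clear visual [g]
visual_d = {"b": 0.080, "d": 0.845, "g": 0.075}  # clear visual [d]

print(flmp(audio_b, visual_g))  # [d] wins: ~0.74
print(flmp(audio_b, visual_d))  # [d] drops to ~0.54; [b] recaptures ~0.46

In this sketch the reversal is driven entirely by the near-zero residual support the visual [g] stimulus lends to [b] (0.001, against 0.080 for visual [d]): this free tuning of very small remaining values is exactly what the abstract argues makes the FLMP account unsatisfactory.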


Cite as: Cathiard, M.A., Schwartz, J.-L., Abry, C. (2001) Asking a naive question about the McGurk effect: Why does audio [b] give more [d] percepts with visual [g] than with visual [d]? Proc. Auditory-Visual Speech Processing, 138-142

@inproceedings{cathiard01_avsp,
  author={M.A. Cathiard and Jean-Luc Schwartz and C. Abry},
  title={{Asking a naive question about the McGurk effect: Why does audio [b] give more [d] percepts with visual [g] than with visual [d]?}},
  year=2001,
  booktitle={Proc. Auditory-Visual Speech Processing},
  pages={138--142}
}