Empirical findings show that speech perception is based not only on auditory input but also on the visual modality (lip movements). One prominent phenomenon demonstrating the power of intermodal information processing is the "McGurk effect", which occurs when incongruent auditory and visual information is presented: participants often report hearing syllables that were presented neither auditorily nor visually. A computer-based version of this test was created. Contrary to the results of studies with English-speaking participants, the German subjects showed almost no "fusions" (e.g. auditory "ABA" combined with visual "AGA" is heard as "ADA"). In a series of three consecutive experiments, decreasing the signal-to-noise ratio reduced the rate of correct identifications of acoustic as well as visual information. Whereas increasing the noise level led to more combinations, the fusion rate remained nearly unchanged. We are currently using this task to compare schizophrenic patients to controls.
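The abstract does not describe how the stimuli were degraded in detail, but varying the signal-to-noise ratio of an auditory stimulus is commonly done by scaling additive noise against the signal power. The following is a minimal, hypothetical sketch (not the authors' actual procedure) of mixing white noise into a waveform at a target SNR in dB, using NumPy:

```python
import numpy as np

def add_noise_at_snr(signal, snr_db, rng=None):
    """Mix white noise into `signal` so the result has the requested SNR (dB).

    SNR(dB) = 10 * log10(P_signal / P_noise), so the noise is scaled to
    P_noise = P_signal / 10**(snr_db / 10).
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal(len(signal))
    p_signal = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_signal / (p_noise * 10 ** (snr_db / 10)))
    return signal + scale * noise

# Example: degrade a 1 kHz tone to 0 dB SNR (signal and noise equally strong)
fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t)
noisy = add_noise_at_snr(tone, snr_db=0.0)
```

Lowering `snr_db` makes the acoustic syllable progressively harder to identify, which is the manipulation the experiments vary across conditions.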
Cite as: Kabisch, B., Nisch, C., Straube, E.R., Campbell, R. (2001) Development of a completely computerized McGurk design under variation of the signal to noise ratio. Proc. Auditory-Visual Speech Processing, 199
@inproceedings{kabisch01_avsp,
  author    = {Bjorn Kabisch and Carol Nisch and Eckart R. Straube and Ruth Campbell},
  title     = {{Development of a completely computerized McGurk design under variation of the signal to noise ratio}},
  year      = {2001},
  booktitle = {Proc. Auditory-Visual Speech Processing},
  pages     = {199}
}