Auditory-Visual Speech Processing 2007 (AVSP2007)

Kasteel Groenendaal, Hilvarenbeek, The Netherlands
August 31 - September 3, 2007

Exploring Semantic Cueing Effects using McGurk Fusion

Azra Nahid Ali

School of Computing and Engineering, University of Huddersfield, England, UK

This paper highlights that audiovisual speech perception is not fully autonomous: lexical semantic context and word meaning can influence McGurk fusion and the rate of fusion responses. Sentence context can diminish the McGurk fusion rate when semantic cueing from the context biases against the expected fusion; however, fusion was not blocked entirely even when the cue strongly favoured the audio or the visual channel. Probabilistic grammars were used to measure the strength and directionality of semantic cueing effects, using conditional probabilities estimated from relative frequencies in the British National Corpus. The conditional-probability results showed that the greater the strength of the semantic bias, the greater the reduction in fusion rate. Conversely, the positive semantic cueing examples of Sharma (1989) showed an increase in fusion rates when the phrase favoured the expected fusion. That audiovisual speech perception can be influenced by semantic context adds to evidence from other types of experiment that speech perception is not purely bottom-up.
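The relative-frequency estimate of a conditional probability described above can be sketched as follows. This is a minimal illustration, not the paper's actual procedure: the word pairs and counts are hypothetical stand-ins for frequencies one would obtain from the British National Corpus.

```python
from collections import Counter

def conditional_prob(bigram_counts, context_counts, context, word):
    """Estimate P(word | context) as a relative frequency:
    count(context, word) / count(context)."""
    if context_counts[context] == 0:
        return 0.0
    return bigram_counts[(context, word)] / context_counts[context]

# Hypothetical corpus counts (stand-ins for BNC frequencies).
bigram_counts = Counter({("fresh", "grass"): 8, ("fresh", "glass"): 2})
context_counts = Counter({"fresh": 10})

# A context that strongly favours "grass" over "glass" yields a
# correspondingly larger conditional probability.
p_grass = conditional_prob(bigram_counts, context_counts, "fresh", "grass")
p_glass = conditional_prob(bigram_counts, context_counts, "fresh", "glass")
```

A larger gap between the two probabilities would correspond, in the paper's terms, to a stronger and more directional semantic bias.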


Bibliographic reference.  Ali, Azra Nahid (2007): "Exploring semantic cueing effects using McGurk fusion", In AVSP-2007, paper P03.