Auditory-Visual Speech Processing 2007 (AVSP2007)
Kasteel Groenendaal, Hilvarenbeek, The Netherlands
This study examines the hypothesis that audio-visual integration of speech requires both an expectation to perceive speech and sufficient attentional resources to allow multimodal integration. Audio-visual integration was measured by recording susceptibility to the McGurk effect whilst participants simultaneously performed a primary visual task under conditions of high or low perceptual load. According to the ‘perceptual load hypothesis’ (Lavie & Tsal, 1994), distracter stimuli (in this case, the moving lips) are only processed if the detection of target stimuli in the primary visual task does not exceed attentional capacity limits. If this hypothesis is accurate, then under conditions of high perceptual load, multimodal integration during speech perception will occur only if this integration is a pre-attentive process. In addition, instead of natural syllables, half the participants were played sine wave speech (SWS) tokens, which may be heard either as non-speech sounds, such as beeps and whistles, or as speech sounds, depending on the listener’s expectations. Tuomainen et al. (2005) demonstrated that audio-visual integration using SWS occurs only when participants are expecting auditory speech stimuli. We anticipated that auditory and visual stimuli would be integrated when participants were in speech mode, and that only under these conditions would perceptual load influence the degree of integration. When the same stimuli were not expected to be speech, no integration would occur and perceptual load would have no influence on the level of integration. The results will be discussed.
Bibliographic reference. Knowland, Vicky / Tuomainen, Jyrki / Rosen, Stuart (2007): "The effects of perceptual load and set on audio-visual speech integration", In AVSP-2007, paper L3-3.