Auditory-Visual Speech Processing (AVSP) 2011

Volterra, Italy
September 1-2, 2011

Visual Speech Influences Speeded Auditory Identification

Tim Paris, Jeesun Kim, Chris Davis

MARCS Auditory Laboratories, University of Western Sydney, Milperra, Australia

Auditory speech perception is faster and more accurate when combined with visual speech. We attempted to replicate previous findings suggesting that visual speech facilitates auditory processing when the speech is paired with a matching video and interferes with processing when paired with a mismatched video. Crucially, we employed button presses rather than a vocal response to determine whether previous results could be attributed to the specific nature of the task. Stimuli consisted of the sounds 'apa', 'aka' and 'ata', paired with matched and mismatched videos that showed the talker's whole face or upper face (control). The percentage of matched AV videos was set at 85% in the congruent condition and 15% in the incongruent condition. The results show that visual speech influences speeded auditory identification decisions. Furthermore, this influence is moderated by (a) visual speech acting as a temporal cue to the acoustic signal and (b) the resolution of perceived differences between the visual and auditory modalities. The current study builds on previous results suggesting that visual speech plays a role in the temporal processing of auditory speech.


Bibliographic reference. Paris, Tim / Kim, Jeesun / Davis, Chris (2011): "Visual speech influences speeded auditory identification", in AVSP-2011, 5-8.