Auditory-Visual Speech Processing (AVSP) 2009

University of East Anglia, Norwich, UK
September 10-13, 2009

Older and Younger Adults Use Fewer Neural Resources During Audiovisual than During Auditory Speech Perception

Axel H. Winneke, Natalie A. Phillips

Department of Psychology / Centre for Research in Human Development, Concordia University, Montreal, Canada

This study examines age-related differences in the brain processes involved in audiovisual (AV) speech perception in multi-talker background babble. The behavioural findings clearly show that younger adults (YA) and older adults (OA) benefited equally from AV speech relative to auditory-only (A) speech. Results from a condition that presented only a photograph alongside spoken words (AVphoto) support the notion that an AV speech benefit cannot be achieved without the dynamic visual speech cues provided by the lips. Interestingly, OA performed more poorly than YA in speechreading but, in line with the inverse effectiveness hypothesis, showed larger auditory enhancement effects, suggesting that OA benefit more from AV speech. Analyses of the auditory N1 event-related potential (ERP) showed that AV speech trials led to an amplitude reduction relative to A-only trials; this reduction was similar in YA and OA. In addition to the amplitude reduction, in both age groups the N1 to AV speech trials peaked earlier, and this latency shift was larger for OA, again indicating that OA benefit more from AV speech than YA. These findings suggest that AV speech processing is more efficient because fewer neural resources are required to achieve superior performance. This idea of efficiency is discussed further with implications for higher-level cognition and successful aging.
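The abstract does not specify how auditory enhancement was quantified. A common metric in the audiovisual speech literature (an assumption here, not taken from this paper) normalizes the AV gain by the room left for improvement over auditory-only performance:

\[ E = \frac{AV - A}{100 - A} \times 100 \]

where \(AV\) and \(A\) are percent-correct word identification scores in the audiovisual and auditory-only conditions. Because the denominator shrinks as auditory-only performance drops, the same absolute AV gain yields a larger enhancement score when \(A\) is lower, which is the pattern the inverse effectiveness hypothesis predicts and which the OA group showed.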

Index Terms: audiovisual speech, event-related potentials, aging, background noise

Bibliographic reference. Winneke, Axel H. / Phillips, Natalie A. (2009): "Older and younger adults use fewer neural resources during audiovisual than during auditory speech perception", In AVSP-2009, 123-126.