AVSP 2003 - International Conference on Audio-Visual Speech Processing

September 4-7, 2003
St. Jorioz, France

Audiovisual Asynchrony Detection for Speech and Nonspeech Signals

Brianna L. Conrey, David B. Pisoni

Department of Psychology, Indiana University, Bloomington, Indiana, USA

This study investigated the "intersensory temporal synchrony window" [1] for audiovisual (AV) signals. A speeded asynchrony detection task was used to measure each participant's temporal synchrony window for speech and nonspeech signals over an 800-ms range of AV asynchronies. Across three sets of stimuli, the video-leading threshold for asynchrony detection was larger than the audio-leading threshold, replicating previous findings reported in the literature. Although the audio-leading threshold did not differ across the stimulus sets, the video-leading threshold was significantly larger for the point-light display (PLD) condition than for either the full-face (FF) or nonspeech (NS) conditions. In addition, a small but reliable effect of visual intelligibility was found for the FF condition: high visual intelligibility words produced larger video-leading thresholds than low visual intelligibility words. Relationships with recent neurophysiological data on multisensory enhancement and convergence are discussed.
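As a rough illustration of how such audio-leading and video-leading thresholds are commonly read off detection data, the Python sketch below interpolates the stimulus onset asynchrony (SOA) at which the proportion of "asynchronous" responses crosses a criterion level. This is not the authors' analysis; the SOA values, response proportions, and 50% criterion are hypothetical, chosen only to show the idea of a synchrony window bounded by two thresholds.

    # Illustrative sketch (not the authors' analysis code): estimating audio-leading
    # and video-leading asynchrony-detection thresholds from proportion-"asynchronous"
    # responses at each SOA. Negative SOAs = audio leads, positive SOAs = video leads.

    # Hypothetical data: SOA in ms -> proportion of trials judged "asynchronous".
    soas = [-400, -300, -200, -100, 0, 100, 200, 300, 400]
    p_async = [0.95, 0.80, 0.45, 0.15, 0.05, 0.10, 0.20, 0.40, 0.85]

    CRITERION = 0.5  # detection criterion; an assumption for illustration


    def crossing(xs, ys, criterion):
        """Linearly interpolate the x value where ys first crosses `criterion`."""
        for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
            lo, hi = sorted((y0, y1))
            if lo <= criterion <= hi and y0 != y1:
                return x0 + (criterion - y0) * (x1 - x0) / (y1 - y0)
        return None


    # Audio-leading threshold: scan outward from synchrony (0 ms) toward audio-leading SOAs.
    audio_side = [(s, p) for s, p in zip(soas, p_async) if s <= 0]
    audio_side.reverse()  # order from 0 ms out to -400 ms
    audio_thresh = crossing([s for s, _ in audio_side], [p for _, p in audio_side], CRITERION)

    # Video-leading threshold: scan outward from synchrony toward video-leading SOAs.
    video_side = [(s, p) for s, p in zip(soas, p_async) if s >= 0]
    video_thresh = crossing([s for s, _ in video_side], [p for _, p in video_side], CRITERION)

    print(f"Audio-leading threshold: {audio_thresh:.0f} ms")
    print(f"Video-leading threshold: {video_thresh:.0f} ms")
    print(f"Synchrony window width:  {video_thresh - audio_thresh:.0f} ms")

With these made-up numbers the video-leading threshold comes out larger than the audio-leading one, mirroring the asymmetry the abstract reports.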

Reference

  1. Lewkowicz, D. J. (1996). Perception of auditory-visual temporal synchrony in human infants. Journal of Experimental Psychology: Human Perception and Performance, 22, 1094-1106.


Bibliographic reference.  Conrey, Brianna L. / Pisoni, David B. (2003): "Audiovisual asynchrony detection for speech and nonspeech signals", In AVSP 2003, 25-30.