Auditory-Visual Speech Processing (AVSP) 2013

Annecy, France
August 29 - September 1, 2013

Audiovisual speech integration: Modulatory factors and the link to sound symbolism

Charles Spence

Crossmodal Research Laboratory, Oxford University, UK

In this talk, I will review some of the latest findings from the burgeoning literature on the audiovisual integration of speech stimuli. I will focus on those factors that have been demonstrated to influence this form of multisensory integration (such as temporal coincidence, speaker/gender matching, and attention; Vatakis & Spence, 2007, 2010). I will also look at a few of the other factors that appear not to matter as much as some researchers have argued they should, such as spatial coincidence (except under a particular subset of circumstances; Spence, 2013). Finally, I will turn to the emerging literature that has documented the widespread existence of audiovisual crossmodal correspondences, and consider their putative link to the literature on embodied cognition (Spence & Deroy, 2012).

References

  1. Spence, C. (2013). Just how important is spatial coincidence to multisensory integration? Evaluating the spatial rule. Annals of the New York Academy of Sciences, 1296, 31-49.
  2. Spence, C., & Deroy, O. (2012). Hearing mouth shapes: Sound symbolism and the reverse McGurk effect. i-Perception, 3, 550-556.
  3. Vatakis, A., & Spence, C. (2007). Crossmodal binding: Evaluating the “unity assumption” using audiovisual speech stimuli. Perception & Psychophysics, 69, 744-756.
  4. Vatakis, A., & Spence, C. (2010). Audiovisual temporal integration for complex speech, object-action, animal call, and musical stimuli. In M. J. Naumer & J. Kaiser (Eds.), Multisensory object perception in the primate brain (pp. 95-121). New York: Springer.

Bibliographic reference. Spence, Charles (2013): "Audiovisual speech integration: modulatory factors and the link to sound symbolism", In AVSP-2013, 3.