In the experiments described in this paper, [aba] and [aga] speech sequences are combined in such a way that an [ada] stimulus is obtained, in reference to the well-known McGurk effect. But contrary to the standard experiments, where an audio [aba] and a visual [aga] stimulus are combined, only audio signals are considered here, resulting in what is called a pure audio McGurk effect. The processing consists of modelling the audio signals with the classical linear prediction (LP) model and then linearly combining the LP filters of the [aba] and [aga] sequences, which model the contribution of the vocal tract, before resynthesis. The relation of these experiments to the problem of data representation and to the nature of the audio-visual integration space is discussed.
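The sketch below illustrates the kind of processing the abstract describes: frame-wise LP analysis of two time-aligned recordings, a linear combination of their LP (vocal-tract) filters, and resynthesis by exciting the combined filter with the residual of one of the sequences. It is not the author's code; the LP order, frame length, mixing weight alpha, the choice of the [aba] residual as excitation, and the direct averaging of LP coefficients are illustrative assumptions (the paper's exact parameters and combination domain are not given in this abstract).

```python
# Minimal sketch (assumed parameters) of LP-based combination of two
# time-aligned speech sequences, followed by resynthesis.
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter


def lp_coeffs(frame, order=12):
    """LP polynomial a = [1, a1, ..., ap] via the autocorrelation method."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    r[0] += 1e-9  # tiny floor so silent frames do not make the system singular
    w = solve_toeplitz(r[:order], r[1:order + 1])  # predictor weights
    return np.concatenate(([1.0], -w))             # denominator A(z)


def mix_lp_resynthesis(x_aba, x_aga, alpha=0.5, frame_len=320, order=12):
    """Resynthesize with a linear combination of the [aba] and [aga] LP filters.

    Assumes the two signals are time-aligned and processed in fixed,
    non-overlapping frames (a real system would use overlap-add).
    """
    n_frames = min(len(x_aba), len(x_aga)) // frame_len
    out = np.zeros(n_frames * frame_len)
    for i in range(n_frames):
        sl = slice(i * frame_len, (i + 1) * frame_len)
        a_aba = lp_coeffs(x_aba[sl], order)
        a_aga = lp_coeffs(x_aga[sl], order)
        # Residual of the [aba] frame through its own analysis filter A(z).
        residual = lfilter(a_aba, [1.0], x_aba[sl])
        # Linear combination of the two vocal-tract (LP) filters.
        # NOTE: averaging coefficients directly can give an unstable filter;
        # interpolating in the LSF domain would be the safer engineering choice.
        a_mix = alpha * a_aba + (1.0 - alpha) * a_aga
        out[sl] = lfilter([1.0], a_mix, residual)
    return out
```

Varying alpha between 0 and 1 would morph the resynthesized vocal-tract filter from the [aga] to the [aba] configuration, which is one simple way to realize the "linear combination of LP filters" described above.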
Cite as: Girin, L. (2003) Pure audio McGurk effect. Proc. Auditory-Visual Speech Processing, 139-144
@inproceedings{girin03_avsp,
  author={Laurent Girin},
  title={{Pure audio McGurk effect}},
  year=2003,
  booktitle={Proc. Auditory-Visual Speech Processing},
  pages={139--144}
}