Auditory-Visual Speech Processing (AVSP'98)
December 4-6, 1998
Emotional states are communicated by seeing the face as well as by listening to affective prosody. When the two inputs are present simultaneously, they are processed concurrently. The present experiment examines the influence of the voice on recognition of the upper versus the lower half of a face. Previous research using an angry-fear facial expression continuum showed that recognition from the lower half of a face was close to chance level. Our experiment asked whether, under these circumstances, the impact of the voice would be the same for both face halves. The results showed that the cross-modal effect of the voice was the same in the two face conditions.
Bibliographic reference. Gelder, Beatrice de / Vroomen, Jean / Bertelson, Paul (1998): "Cross-modal bias of voice tone on facial expression: upper versus lower halves of a face", In AVSP-1998, 93-96.