Auditory-Visual Speech Processing (AVSP) 2011

Volterra, Italy
September 1-2, 2011

Dimensional Mapping of Multimodal Integration on Audiovisual Emotion Perception

Yoshiko Arimoto (1,2), Kazuo Okanoya (1,2,3)

(1) Emotional Information Project, JST, ERATO, Japan
(2) Brain Science Institute, RIKEN, Japan
(3) Graduate School of Arts and Sciences, The University of Tokyo, Japan

The aim of this research was to investigate which emotions are perceived when incongruent vocal and facial emotional expressions are integrated into a single emotional expression. Our approach maps unimodal and bimodal perceptual emotional information onto the dimensions of an emotional space created with principal component analysis (PCA). Unimodal perception tests and a bimodal congruent/incongruent perception test were conducted with stimuli in which professional actors expressed four emotions (anger, joy, fear, and sadness), and observers rated the intensity of six emotions (the four expressed emotions plus disgust and surprise) on a six-point scale. A PCA was performed on the scores of each stimulus to create a perceptual emotional space and to compare unimodal perception with bimodal perception.
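The dimensional mapping described above can be sketched as follows. This is a minimal illustration, not the authors' analysis pipeline: the ratings matrix is synthetic stand-in data (rows are stimuli, columns are the six rated emotions), and PCA is computed via SVD on the mean-centered scores to obtain stimulus coordinates in a low-dimensional perceptual emotional space.

```python
import numpy as np

# Hypothetical ratings matrix: 20 stimuli x 6 emotion scales
# (anger, joy, fear, sadness, disgust, surprise), with intensity
# ratings on a six-point scale. Illustrative data only.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 7, size=(20, 6)).astype(float)

# PCA via SVD on the mean-centered rating matrix.
centered = ratings - ratings.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

# Coordinates of each stimulus in the space spanned by the
# first two principal components; these dimensions can then be
# compared between unimodal and bimodal perception conditions.
scores = centered @ Vt[:2].T

# Proportion of variance explained by each component.
explained = S**2 / np.sum(S**2)
```

Incongruent stimuli can then be located in the same space fitted on congruent (or unimodal) ratings, making shifts in perceived emotion visible as displacements along the principal dimensions.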

The results showed that some incongruent emotional expressions were perceived as emotions significantly inconsistent with the expressed emotion.

Index Terms. emotional speech, facial expression, emotion perception, multimodal integration, principal component analysis

Full Paper

Bibliographic reference. Arimoto, Yoshiko / Okanoya, Kazuo (2011): "Dimensional mapping of multimodal integration on audiovisual emotion perception", In AVSP-2011, 93-98.