Interspeech'2005 - Eurospeech
Recently, there has been a significant amount of work on recognizing emotions from speech and biosignals. However, most approaches to date concentrate on a single modality and do not exploit the fact that an integrated multimodal analysis may help to resolve ambiguities and compensate for errors. In this paper, we describe various methods for fusing physiological and voice data at the feature level and the decision level, as well as a hybrid integration scheme. The results of the integrated recognition approach are then compared with the individual recognition results from each modality.
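The feature-level and decision-level fusion strategies mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: the feature values, class posteriors, and the weight `w` are all hypothetical, and the weighted-average combination rule is just one common decision-level scheme.

```python
import numpy as np

# Hypothetical per-modality feature vectors (values and dimensions are
# illustrative, not taken from the paper).
speech_features = np.array([0.2, 0.8, 0.5])   # e.g. pitch/energy statistics
physio_features = np.array([0.6, 0.1])        # e.g. skin conductance, heart rate

# Feature-level fusion: concatenate the modality features into a single
# vector before it is passed to one classifier.
fused_features = np.concatenate([speech_features, physio_features])

# Decision-level fusion: each modality first yields its own class
# posteriors; the posteriors are then combined, here by a weighted
# average (one common scheme).
speech_posterior = np.array([0.7, 0.2, 0.1])  # P(emotion | speech), hypothetical
physio_posterior = np.array([0.4, 0.4, 0.2])  # P(emotion | physiology), hypothetical
w = 0.6                                       # assumed weight favouring speech
combined = w * speech_posterior + (1 - w) * physio_posterior
predicted_class = int(np.argmax(combined))
```

A hybrid scheme, as the abstract suggests, would mix both ideas, e.g. combining a classifier trained on the concatenated features with the per-modality decisions.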
Bibliographic reference. Kim, Jonghwa / André, Elisabeth / Rehm, Matthias / Vogt, Thurid / Wagner, Johannes (2005): "Integrating information from speech and physiological signals to achieve emotional sensitivity", In INTERSPEECH-2005, 809-812.