Spoken dialog systems (SDS) integrated into human-machine interaction interfaces are becoming a standard technology. However, current state-of-the-art SDS usually cannot provide the user with a natural way of communicating. Existing automated dialog systems do not pay enough attention to interaction problems related to affected user behavior. As a result, Automatic Speech Recognition (ASR) engines are unable to recognize affected speech, and the dialog strategy does not make use of the user's emotional state. This paper addresses several aspects of processing affected speech within natural human-machine interaction. First, we propose an ASR engine adapted to affected speech. Second, we describe our methods for recognizing emotion in speech and present our emotion classification results from the Interspeech 2009 Emotion Challenge. Third, we evaluate speech recognition models adapted to affected speech and introduce an approach to emotion-adaptive dialog management in human-machine interaction.
Bibliographic reference: Vlasenko, Bogdan / Wendemuth, Andreas (2009): "Processing affected speech within human machine interaction", in INTERSPEECH-2009, 2039-2042.