Effective human-robot communication is a central concern in modern robotics. The systems involved should be highly robust, leaving little room for misunderstanding users' commands. The main purpose of this work is to develop a general framework for multimodal human-robot communication that allows users to interact with robots using speech and gestures combined into unified commands. The resulting architecture relies on separate modules that analyse the low-level inputs of each channel, together with a fusion module that extracts semantics from these multiple channels. In this paper, we introduce our general approach and provide a case study in which the gesture and speech modalities are combined.
Bibliographic reference. Cutugno, Francesco / Finzi, Alberto / Fiore, Michelangelo / Leone, Enrico / Rossi, Silvia (2013): "Interacting with robots via speech and gestures, an integrated architecture", In INTERSPEECH-2013, 3727-3731.