In this paper, we present novel methods and applications of audio and video signal processing for a multimodal assisted living smart space. This intelligent environment was developed during the 7th Summer Workshop on Multimodal Interfaces (eNTERFACE). It integrates automatic systems for audio- and video-based monitoring and user tracking in the smart space. In the assisted living environment, users are tracked by several omnidirectional video cameras, while speech and non-speech audio events are recognized by a microphone array. The multiple object tracking precision (MOTP) of the developed video monitoring system was 0.78 and 0.73, and the multiple object tracking accuracy (MOTA) was 62.81% and 72.31%, for the single-person and three-person scenarios, respectively. The recognition accuracy of the proposed multilingual speech and audio event recognition system was 96.5% for users' speech commands and 93.8% for non-speech acoustic events. The design of the assisted living environment, the test scenarios, and the process of audio-visual database collection are described in the paper.
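For readers unfamiliar with the reported tracking metrics, the following is a minimal, generic sketch of the standard CLEAR MOT definitions (MOTP and MOTA) as introduced by Bernardin and Stiefelhagen. It is not the authors' evaluation code, and the per-frame counts in the usage example are hypothetical:

```python
# Generic sketch of the CLEAR MOT multi-object tracking metrics.
# MOTP averages the matching error (e.g. distance or 1 - IoU) over all
# matched object-hypothesis pairs; MOTA aggregates misses (false
# negatives), false positives, and identity switches relative to the
# total number of ground-truth objects.

def clear_mot(frames):
    """frames: iterable of per-frame evaluation counts with keys:
       'dist_sum'    - summed matching error over matched pairs,
       'matches'     - number of matched object-hypothesis pairs,
       'misses'      - ground-truth objects left unmatched (FN),
       'false_pos'   - hypotheses with no ground-truth match (FP),
       'id_switches' - identity switches in this frame,
       'gt'          - number of ground-truth objects in this frame."""
    dist = matches = errors = gt = 0
    for f in frames:
        dist += f["dist_sum"]
        matches += f["matches"]
        errors += f["misses"] + f["false_pos"] + f["id_switches"]
        gt += f["gt"]
    motp = dist / matches if matches else 0.0
    mota = 1.0 - errors / gt if gt else 0.0
    return motp, mota

# Hypothetical two-frame example:
frames = [
    {"dist_sum": 0.6, "matches": 2, "misses": 0,
     "false_pos": 1, "id_switches": 0, "gt": 2},
    {"dist_sum": 0.9, "matches": 3, "misses": 1,
     "false_pos": 0, "id_switches": 1, "gt": 4},
]
motp, mota = clear_mot(frames)  # MOTP = 0.3, MOTA = 0.5
```

Note that when the matching error is a distance, a lower MOTP is better; when an overlap-based score is used instead, the interpretation is reversed.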
Bibliographic reference. Karpov, Alexey / Akarun, Lale / Yalçın, Hülya / Ronzhin, Alexander / Demiröz, Barış Evrim / Çoban, Aysun / Železný, Miloš (2014): "Audio-visual signal processing in a multimodal assisted living environment", in INTERSPEECH-2014, 1023-1027.