Auditory-Visual Speech Processing (AVSP) 2009

University of East Anglia, Norwich, UK
September 10-13, 2009

Voice Activity Detection based on Fusion of Audio and Visual Information

Shin’ichi Takeuchi (1), Takashi Hashiba (2), Satoshi Tamura (3), Satoru Hayamizu (3)

(1) Virtual System Laboratory, Gifu University, Japan
(2) Graduate School of Engineering, Gifu University, Japan
(3) Faculty of Engineering, Gifu University, Japan

In this paper, we propose a multi-modal voice activity detection (VAD) system that uses audio and visual information. Audio-only VAD systems are typically not robust to acoustic noise. Incorporating visual information, for example features extracted from mouth images, can improve robustness, since the visual information is not affected by acoustic noise. In multi-modal speech signal processing, there are two methods for fusing the audio and visual information: feature fusion, which concatenates the audio and visual features into a single vector, and decision fusion, which employs audio-only and visual-only classifiers and then fuses the unimodal decisions. We investigate the effectiveness of these methods and also compare model-based and model-free methods for VAD. Experimental results show that feature fusion methods are generally more effective, and that decision fusion methods perform better when model-free methods are used.
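As a minimal sketch of the distinction between the two fusion schemes (not the paper's actual system), the following Python code contrasts them on synthetic per-frame features. The logistic-regression classifiers, the feature dimensions, and the equal-weight score averaging are all illustrative assumptions.

```python
# Illustrative sketch of feature fusion vs. decision fusion for VAD.
# Classifiers, features, and fusion weights are assumed for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic per-frame data: hypothetical audio (e.g. MFCC-like) and
# visual (e.g. mouth-image-based) features with speech/non-speech labels.
n_frames = 1000
labels = rng.integers(0, 2, n_frames)                      # 1 = speech, 0 = non-speech
audio = rng.normal(labels[:, None], 1.0, (n_frames, 12))   # assumed audio features
visual = rng.normal(labels[:, None], 1.5, (n_frames, 6))   # assumed visual features

# --- Feature fusion: concatenate modalities, train one classifier. ---
fused = np.hstack([audio, visual])
feat_clf = LogisticRegression(max_iter=1000).fit(fused, labels)

# --- Decision fusion: one classifier per modality, combine their scores. ---
audio_clf = LogisticRegression(max_iter=1000).fit(audio, labels)
visual_clf = LogisticRegression(max_iter=1000).fit(visual, labels)

def decision_fusion(a_feat, v_feat, w_audio=0.5):
    """Weighted average of unimodal speech probabilities
    (equal weights here; the fusion rule is an assumption)."""
    p_a = audio_clf.predict_proba(a_feat)[:, 1]
    p_v = visual_clf.predict_proba(v_feat)[:, 1]
    return (w_audio * p_a + (1.0 - w_audio) * p_v) > 0.5

print("feature fusion accuracy:", feat_clf.score(fused, labels))
print("decision fusion accuracy:", (decision_fusion(audio, visual) == labels).mean())
```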

Index Terms: voice activity detection, multi-modal, AVVAD


Bibliographic reference. Takeuchi, Shin’ichi / Hashiba, Takashi / Tamura, Satoshi / Hayamizu, Satoru (2009): "Voice activity detection based on fusion of audio and visual information", In AVSP-2009, 151-154.