Auditory-Visual Speech Processing 2007 (AVSP2007)
Kasteel Groenendaal, Hilvarenbeek, The Netherlands
Audio-visual speech source separation consists in combining visual speech processing techniques (e.g. lip parameter tracking) with source separation methods to improve and/or simplify the extraction of a speech signal from a mixture of acoustic signals. In this paper, we present a new approach to this problem: visual information is used as a voice activity detector (VAD). Results show that, in the difficult case of realistic convolutive mixtures, the classic problem of the permutation of the output frequency channels can be solved using the visual information, with simpler processing than purely audio-based methods.
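To illustrate the idea described in the abstract, here is a minimal sketch of how a visual VAD could resolve the per-frequency permutation ambiguity of convolutive source separation. This is not the authors' algorithm: the function name, array shapes, and the criterion (the target speaker's channel should carry little energy during visually-detected silence) are illustrative assumptions.

```python
import numpy as np

def fix_permutations(outputs, silence_mask):
    """Resolve the per-frequency permutation ambiguity with a visual VAD.

    outputs: complex STFT of two separated signals, shape (2, n_freq, n_frames).
    silence_mask: boolean per-frame mask, True where the visual VAD
        (e.g. from lip parameter tracking) detects target-speaker silence.

    Returns a copy of outputs with channel 0 aligned to the target
    speaker in every frequency bin (illustrative assumption: the
    target's channel is the quieter one during its silent frames).
    """
    aligned = outputs.copy()
    for f in range(outputs.shape[1]):
        # Energy of each output channel during visually-detected silence.
        energy = (np.abs(outputs[:, f][:, silence_mask]) ** 2).sum(axis=1)
        # Swap the two channels in this bin if channel 1 is quieter
        # during the target's silence than channel 0.
        if energy[1] < energy[0]:
            aligned[:, f] = outputs[::-1, f]
    return aligned
```

A frequency-domain separator recovers the sources independently in each bin, so channel order can differ from bin to bin; the visual silence mask gives a common reference that makes the per-bin assignments consistent without audio-only heuristics.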
Bibliographic reference. Rivet, Bertrand / Girin, Laurent / Servière, Christine / Pham, Dinh-Tuan / Jutten, Christian (2007): "Audiovisual speech source separation: a regularization method based on visual voice activity detection", In AVSP-2007, paper P07.