Auditory-Visual Speech Processing (AVSP) 2009

University of East Anglia, Norwich, UK
September 10-13, 2009

Audio-Visual Mutual Dependency Models for Biometric Liveness Checks

Girija Chetty, Roland Göcke, Michael Wagner

National Center for Biometric Studies, Faculty of Information Sciences and Engineering University of Canberra, Australia

In this paper we propose a liveness checking technique for multimodal biometric authentication systems based on audio-visual mutual dependency models. Liveness checking ensures that the biometric cues are acquired from a live person who is actually present at the time of capture. The liveness check based on mutual dependency models is performed by fusing acoustic and visual speech features that measure the degree of synchrony between the lips and the voice extracted from speaking-face video sequences. Performance evaluation in terms of DET (Detection Error Tradeoff) curves and EERs (Equal Error Rates) on publicly available audiovisual speech databases shows a significant performance improvement for the proposed fusion of face-voice features based on mutual dependency models.
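To make the synchrony idea concrete, the sketch below scores a simple frame-wise correlation between an acoustic energy track and a lip-opening track, and uses it to separate a "live" pair from a replay-style mismatch. This is a toy illustration only: the function name, the synthetic signals, and the correlation measure are assumptions for exposition, not the mutual dependency models evaluated in the paper.

```python
import numpy as np

def synchrony_score(audio_feat, lip_feat):
    """Naive liveness cue: Pearson correlation between a 1-D acoustic
    energy track and a 1-D lip-opening track. A stand-in for the richer
    audio-visual mutual dependency models described in the paper."""
    a = (audio_feat - audio_feat.mean()) / audio_feat.std()
    v = (lip_feat - lip_feat.mean()) / lip_feat.std()
    return float(np.mean(a * v))

# Synthetic demo: a "live" pair, where the lip track follows the audio
# energy, versus a replay-attack pair with unrelated lip dynamics.
t = np.linspace(0, 1, 200)
audio = np.abs(np.sin(2 * np.pi * 5 * t))
live_lips = np.abs(np.sin(2 * np.pi * 5 * t + 0.1))   # nearly in sync
replay_lips = np.abs(np.sin(2 * np.pi * 3 * t))       # different dynamics

print(synchrony_score(audio, live_lips) > synchrony_score(audio, replay_lips))
```

In a real system the two tracks would come from short-time audio energy (or MFCCs) and tracked lip contours, and the threshold on the score would be tuned on a development set, which is where DET curves and EERs enter.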

Index Terms: multimodal, face-voice, speaker verification, ancillary speaker characteristics

Full Paper

Bibliographic reference.  Chetty, Girija / Göcke, Roland / Wagner, Michael (2009): "Audio-visual mutual dependency models for biometric liveness checks", In AVSP-2009, 32-37.