2001: A Speaker Odyssey - The Speaker Recognition Workshop

June 18-22, 2001
Crete, Greece

Using Lip Features for Multimodal Speaker Verification

Xiaozheng Zhang (1), Charles C. Broun (2)

(1) Georgia Institute of Technology, Atlanta, Georgia, USA
(2) Motorola Human Interface Lab, Tempe, Arizona, USA

In today's information age, privacy and personalization are at the forefront of society's concerns. As such, biometrics is viewed as an essential component of current and evolving technological systems. Consumers demand unobtrusive and non-invasive approaches. In our previous work, we demonstrated a speaker verification system that meets these criteria. However, fielded systems face additional constraints: the required recognition transactions are often performed in adverse environments and across diverse populations, necessitating robust solutions.

We propose a multimodal approach that builds on our current state-of-the-art speaker verification technology. To maintain the transparent nature of the speech interface, we focus on optical sensing technology to provide the additional modality, giving us an audio-visual person recognition system. For the audio domain, we use our existing speaker verification system. For the visual domain, we focus on lip motion.
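The abstract does not specify how the audio and visual modalities are combined. One common way to build such an audio-visual verifier is score-level fusion, where each modality produces a match score and the decision is made on a weighted combination. The sketch below illustrates that general idea only; the function names, weights, and threshold are illustrative assumptions, not details from the paper.

```python
def fuse_scores(audio_score, visual_score, w_audio=0.7, w_visual=0.3):
    """Weighted-sum score-level fusion of two modality match scores.

    The weights here are illustrative assumptions; in a real system
    they would be tuned on a held-out development set.
    """
    return w_audio * audio_score + w_visual * visual_score


def verify(audio_score, visual_score, threshold=0.5):
    """Accept the claimed identity if the fused score clears the threshold.

    The threshold is likewise a placeholder; it trades off false accepts
    against false rejects.
    """
    return fuse_scores(audio_score, visual_score) >= threshold


# Example: strong audio match, moderate lip-motion match.
decision = verify(audio_score=0.9, visual_score=0.6)
```

A weighted sum is only one option; fusion can also happen at the feature level (concatenating audio and lip features before classification), which the paper's joint audio-visual framing would equally permit.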


Bibliographic reference. Zhang, Xiaozheng / Broun, Charles C. (2001): "Using lip features for multimodal speaker verification", in ODYSSEY-2001, 231-236.