Sixth European Conference on Speech Communication and Technology
(EUROSPEECH'99)

Budapest, Hungary
September 5-9, 1999

Speaker Adaptation for Audio-Visual Speech Recognition

Gerasimos Potamianos, Alexandros Potamianos

AT&T Labs - Research, Florham Park, NJ, USA

In this paper, speaker adaptation is investigated for audio-visual automatic speech recognition (ASR) based on the multi-stream hidden Markov model (HMM). First, the audio-only and visual-only HMM parameters are adapted by combining maximum a posteriori (MAP) and maximum likelihood linear regression (MLLR) adaptation. Subsequently, the audio-visual HMM stream exponents are adapted by means of discriminative training, to better capture the reliability of each modality for the specific speaker. Various visual feature sets are compared, and features based on linear discriminant analysis (LDA) are demonstrated to yield superior multi-speaker and speaker-adapted recognition performance. In addition, visual feature mean normalization is shown to significantly improve visual-only and audio-visual ASR performance. Adaptation experiments on a 49-subject database are reported: on average, a 28% relative word error reduction is achieved by adapting the multi-speaker audio-visual HMM to each subject in the database.
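Two of the ingredients named in the abstract are simple enough to sketch concretely: visual feature mean normalization (per-utterance mean subtraction) and the multi-stream HMM likelihood combination weighted by stream exponents. The snippet below is an illustrative sketch under assumed names and interfaces, not the authors' implementation; in the paper the exponents are adapted discriminatively per speaker, which is not shown here.

```python
import numpy as np

def mean_normalize(features):
    """Feature mean normalization: subtract the per-utterance mean
    from each feature dimension (rows = frames, cols = dimensions)."""
    features = np.asarray(features, dtype=float)
    return features - features.mean(axis=0, keepdims=True)

def av_log_likelihood(log_lik_audio, log_lik_visual, exp_audio, exp_visual):
    """Multi-stream HMM state log-likelihood: per-stream (audio, visual)
    log-likelihoods weighted by stream exponents that reflect the
    reliability of each modality."""
    return exp_audio * log_lik_audio + exp_visual * log_lik_visual
```

The stream exponents are commonly constrained to be non-negative and to sum to one, so that setting the audio exponent to 1 recovers audio-only decoding and intermediate values interpolate between the two modalities.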



Bibliographic reference: Potamianos, Gerasimos / Potamianos, Alexandros (1999): "Speaker adaptation for audio-visual speech recognition", in Proc. EUROSPEECH'99, Budapest, Hungary, pp. 1291-1294.