INTERSPEECH 2007
8th Annual Conference of the International Speech Communication Association

Antwerp, Belgium
August 27-31, 2007

Fused HMM-Adaptation of Multi-Stream HMMs for Audio-Visual Speech Recognition

David Dean (1), Patrick Lucey (1), Sridha Sridharan (1), Tim Wark (2)

(1) Queensland University of Technology, Australia
(2) CSIRO, Australia

A technique known as fused hidden Markov models (FHMMs) was recently proposed as an alternative multi-stream modelling technique for audio-visual speaker recognition. In this paper we show that for audio-visual speech recognition (AVSR), FHMMs can be adopted as a novel method of training synchronous multi-stream HMMs (MSHMMs). MSHMMs, as proposed by several authors for use in AVSR, are trained jointly on the audio and visual modalities. In contrast, our proposed FHMM-adaptation method adapts the multi-stream models from single-stream audio HMMs and, in the process, models the video speech in the final model better than jointly-trained MSHMMs do. Through experiments conducted on the XM2VTS database we show that the improved video performance of the FHMM-adapted MSHMMs results in an improvement in AVSR performance over jointly-trained MSHMMs at all levels of audio noise, and provides a significant advantage in high-noise environments.
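The synchronous MSHMM decoding that the abstract refers to typically combines per-stream observation log-likelihoods with exponent weights. The following is a minimal illustrative sketch of that weighted combination, not code from the paper; the function name, the single scalar weight, and the complementary video weight are assumptions for illustration.

```python
def fused_stream_log_likelihood(log_lik_audio, log_lik_video, audio_weight):
    """Combine per-stream observation log-likelihoods for one MSHMM state.

    audio_weight is assumed to lie in [0, 1]; the video stream receives
    the complementary weight (1 - audio_weight). In noisy audio
    conditions a decoder would lower audio_weight to lean on the
    video stream.
    """
    if not 0.0 <= audio_weight <= 1.0:
        raise ValueError("audio_weight must be in [0, 1]")
    return audio_weight * log_lik_audio + (1.0 - audio_weight) * log_lik_video


# Example: with audio_weight = 0.7, an audio log-likelihood of -10 and a
# video log-likelihood of -20 combine to 0.7*(-10) + 0.3*(-20) = -13.0.
combined = fused_stream_log_likelihood(-10.0, -20.0, 0.7)
```

In a full Viterbi decode, this combined score would replace the single-stream observation log-likelihood at every state and frame.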

Bibliographic reference. Dean, David / Lucey, Patrick / Sridharan, Sridha / Wark, Tim (2007): "Fused HMM-adaptation of multi-stream HMMs for audio-visual speech recognition". In: INTERSPEECH-2007, 666-669.