ISCA Archive Interspeech 2007

Fused HMM-adaptation of multi-stream HMMs for audio-visual speech recognition

David Dean, Patrick Lucey, Sridha Sridharan, Tim Wark

A technique known as fused hidden Markov models (FHMMs) was recently proposed as an alternative multi-stream modelling technique for audio-visual speaker recognition. In this paper we show that for audio-visual speech recognition (AVSR), FHMMs can be adopted as a novel method of training synchronous multi-stream HMMs (MSHMMs). MSHMMs, as proposed by several authors for use in AVSR, are jointly trained on both the audio and visual modalities. In contrast, our proposed FHMM-adaptation method adapts the multi-stream models from single-stream audio HMMs and, in the process, models the visual speech in the final model better than jointly-trained MSHMMs do. Through experiments conducted on the XM2VTS database we show that the improved video performance of the FHMM-adapted MSHMMs yields an improvement in AVSR performance over jointly-trained MSHMMs at all levels of audio noise, and provides a significant advantage in high-noise environments.
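At decode time, a synchronous MSHMM of the kind described in the abstract scores each state by combining the audio and visual emission likelihoods with exponent weights (equivalently, a weighted sum of per-stream log-likelihoods). The sketch below illustrates that scoring rule only; the function names, the diagonal-Gaussian emission densities, and the example stream weight are illustrative assumptions, not the authors' implementation.

```python
import math

def gauss_loglik(x, mean, var):
    # Log-likelihood of scalar x under a univariate Gaussian N(mean, var).
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def mshmm_state_loglik(audio_obs, video_obs, audio_pdf, video_pdf,
                       audio_weight=0.7):
    """Synchronous MSHMM state score (illustrative sketch).

    audio_pdf / video_pdf are lists of (mean, var) pairs, one per feature
    dimension, modelling each stream with a diagonal Gaussian (an assumed
    emission model; real systems typically use Gaussian mixtures).
    The streams are combined with exponent weights that sum to one.
    """
    la = sum(gauss_loglik(x, m, v) for x, (m, v) in zip(audio_obs, audio_pdf))
    lv = sum(gauss_loglik(x, m, v) for x, (m, v) in zip(video_obs, video_pdf))
    return audio_weight * la + (1.0 - audio_weight) * lv
```

Lowering `audio_weight` shifts reliance toward the video stream, which is why a model with better-trained video components (such as the FHMM-adapted MSHMMs) can hold up in high audio noise.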


doi: 10.21437/Interspeech.2007-285

Cite as: Dean, D., Lucey, P., Sridharan, S., Wark, T. (2007) Fused HMM-adaptation of multi-stream HMMs for audio-visual speech recognition. Proc. Interspeech 2007, 666-669, doi: 10.21437/Interspeech.2007-285

@inproceedings{dean07_interspeech,
  author={David Dean and Patrick Lucey and Sridha Sridharan and Tim Wark},
  title={{Fused HMM-adaptation of multi-stream HMMs for audio-visual speech recognition}},
  year=2007,
  booktitle={Proc. Interspeech 2007},
  pages={666--669},
  doi={10.21437/Interspeech.2007-285}
}