The aim of the present study is to investigate several key challenges in audio-visual speech recognition, namely asynchrony modeling of multimodal speech, estimation of the relative significance of auditory and visual speech cues, and stream weight optimization. Our research shows that viseme-dependent significance weights improve the performance of a state-asynchronous coupled HMM (CHMM)-based speech recognizer. In addition, for a state-synchronous multi-stream HMM (MSHMM)-based recognizer, error rates can be reduced by applying stationary time delays to the visual data relative to the corresponding audio signal. Evaluation experiments showed that individual audio-visual stream weights for each viseme-phoneme pair lead to a 20% relative reduction in word error rate (WER).
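The viseme-dependent stream weighting described above can be sketched as a weighted combination of per-stream log-likelihoods. The following is a minimal illustration, not the paper's implementation: the weight table, viseme labels, and numeric values are hypothetical, and only the general form log P = λ·log P_audio + (1−λ)·log P_video is assumed.

```python
import math

# Hypothetical viseme-dependent audio stream weights (illustrative values,
# not taken from the paper); the visual weight is the complement 1 - lam.
VISEME_AUDIO_WEIGHTS = {
    "bilabial": 0.6,  # lip closure clearly visible: visual stream weighted higher
    "velar": 0.9,     # articulation largely hidden: rely mostly on audio
}

def combined_log_likelihood(log_p_audio: float, log_p_video: float, viseme: str) -> float:
    """Weighted per-viseme fusion of audio and visual stream log-likelihoods:
    log P = lam * log P_audio + (1 - lam) * log P_video."""
    lam = VISEME_AUDIO_WEIGHTS[viseme]
    return lam * log_p_audio + (1.0 - lam) * log_p_video
```

In a CHMM or MSHMM decoder this combination would be applied per state during Viterbi search; here it is shown as a standalone function for clarity.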
Bibliographic reference. Karpov, Alexey / Ronzhin, Andrey / Markov, Konstantin / Železný, Miloš (2010): "Viseme-dependent weight optimization for CHMM-based audio-visual speech recognition", In INTERSPEECH-2010, 2678-2681.