INTERSPEECH 2010
11th Annual Conference of the International Speech Communication Association

Makuhari, Chiba, Japan
September 26-30, 2010

Viseme-Dependent Weight Optimization for CHMM-Based Audio-Visual Speech Recognition

Alexey Karpov (1), Andrey Ronzhin (1), Konstantin Markov (2), Miloš Železný (3)

(1) Russian Academy of Sciences, Russia
(2) University of Aizu, Japan
(3) University of West Bohemia, Czech Republic

The aim of the present study is to investigate key challenges of audio-visual speech recognition technology: asynchrony modeling of multimodal speech, estimation of the relative significance of auditory and visual speech, and stream weight optimization. Our research shows that viseme-dependent significance weights improve the performance of a state-asynchronous CHMM-based speech recognizer. In addition, a state-synchronous MSHMM-based recognizer makes fewer errors when stationary time delays are applied to the visual data with respect to the corresponding audio signal. Evaluation experiments showed that individual audio-visual stream weights for each viseme-phoneme pair yield a relative WER reduction of 20%.
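For context, stream weighting in multistream and coupled HMM recognizers is conventionally expressed as an exponentially weighted product of the per-stream observation likelihoods. The following is a standard textbook formulation, not quoted from the paper itself (this page carries only the abstract); the per-viseme weight notation is illustrative of the viseme-dependent scheme the abstract describes:

\[
b_j(\mathbf{o}_t) \;=\; \bigl[\, b_j^{A}(\mathbf{o}_t^{A}) \,\bigr]^{\lambda_v} \cdot \bigl[\, b_j^{V}(\mathbf{o}_t^{V}) \,\bigr]^{1-\lambda_v}, \qquad 0 \le \lambda_v \le 1,
\]

where $\mathbf{o}_t^{A}$ and $\mathbf{o}_t^{V}$ are the audio and visual observations at time $t$, $b_j^{A}$ and $b_j^{V}$ are the stream likelihoods of state $j$, and the exponent $\lambda_v$ is optimized per viseme-phoneme pair rather than held constant across the whole model, as in globally weighted fusion.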


Bibliographic reference. Karpov, Alexey / Ronzhin, Andrey / Markov, Konstantin / Železný, Miloš (2010): "Viseme-dependent weight optimization for CHMM-based audio-visual speech recognition", In INTERSPEECH-2010, 2678-2681.