INTERSPEECH 2014
15th Annual Conference of the International Speech Communication Association

Singapore
September 14-18, 2014

Lipreading using Convolutional Neural Network

Kuniaki Noda (1), Yuki Yamaguchi (2), Kazuhiro Nakadai (3), Hiroshi G. Okuno (2), Tetsuya Ogata (1)

(1) Waseda University, Japan
(2) Kyoto University, Japan
(3) Honda Research Institute Japan, Japan

In recent automatic speech recognition studies, applications of deep learning architectures to acoustic modeling have eclipsed conventional sound features such as Mel-frequency cepstral coefficients. In visual speech recognition (VSR) studies, however, handcrafted visual feature extraction mechanisms are still widely used. In this paper, we propose to apply a convolutional neural network (CNN) as the visual feature extraction mechanism for VSR. By training the CNN on images of a speaker's mouth area paired with phoneme labels, it acquires multiple convolutional filters that extract the visual features essential for recognizing phonemes. A hidden Markov model in our proposed system then models the temporal dependencies of the generated phoneme label sequences and recognizes multiple isolated words. The proposed system is evaluated on an audio-visual speech dataset comprising 300 Japanese words spoken by six different speakers. The results of our isolated word recognition experiment demonstrate that the visual features acquired by the CNN significantly outperform those obtained by conventional dimensionality compression approaches, including principal component analysis.
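As an illustration of the feature-extraction stage, the sketch below shows a minimal CNN phoneme classifier in PyTorch whose penultimate-layer activations could serve as the per-frame visual features fed to a downstream HMM word recognizer. The input size (32x32 grayscale mouth crops), layer widths, and the 40-class phoneme inventory are illustrative assumptions, not the architecture reported in the paper.

import torch
import torch.nn as nn

class MouthCNN(nn.Module):
    # Minimal CNN mapping a mouth-area image to phoneme class scores.
    # The penultimate activations can be taken as the per-frame visual
    # feature vector for an HMM-based isolated word recognizer.
    def __init__(self, num_phonemes=40, feature_dim=128):  # assumed sizes
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),   # 16x16 -> 8x8
        )
        self.feature = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, feature_dim), nn.ReLU(),
        )
        self.classifier = nn.Linear(feature_dim, num_phonemes)

    def forward(self, x):
        feats = self.feature(self.conv(x))      # visual feature vector
        return self.classifier(feats), feats    # phoneme logits + features

# Usage: a batch of 8 grayscale 32x32 mouth crops.
model = MouthCNN()
images = torch.randn(8, 1, 32, 32)
logits, features = model(images)
print(logits.shape, features.shape)  # torch.Size([8, 40]) torch.Size([8, 128])

In the paper's pipeline, the per-frame phoneme predictions (or the learned feature vectors) would then be passed to a hidden Markov model that models their temporal dependencies and scores isolated-word hypotheses over the frame sequence.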


Bibliographic reference.  Noda, Kuniaki / Yamaguchi, Yuki / Nakadai, Kazuhiro / Okuno, Hiroshi G. / Ogata, Tetsuya (2014): "Lipreading using convolutional neural network", In INTERSPEECH-2014, 1149-1153.