FAAVSP - The 1st Joint Conference on Facial Analysis, Animation, and Auditory-Visual Speech Processing
Appearance-based feature extraction constitutes the dominant approach to visual speech representation in a variety of problems, such as automatic speechreading and visual speech activity detection, among others. To obtain the necessary visual features, a rectangular region-of-interest (ROI) containing the speaker's mouth is typically extracted first, followed, most commonly, by a discrete cosine transform (DCT) of the ROI pixel values and a feature selection step. The approach, although algorithmically simple and computationally efficient, suffers from the lack of DCT invariance to typical ROI deformations, stemming primarily from speaker head-pose variability and small tracking inaccuracies. To address this problem, in this paper, the recently introduced scattering transform is investigated as an alternative to the DCT within the appearance-based framework for ROI representation, suitable for visual speech applications. A number of such tasks are considered, namely visual-only speech activity detection, visual-only and audio-visual sub-phonetic classification, as well as audio-visual speech synchrony detection, all employing deep neural network classifiers with either DCT- or scattering-based visual features. Comparative experiments with the resulting systems are conducted on a large audio-visual corpus of frontal face videos, demonstrating, in all cases, the superiority of the scattering transform over the DCT.
Index Terms: scattering transform, discrete cosine transform, deep neural networks, visual speech activity detection, automatic speechreading, audio-visual synchrony.
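The DCT-based pipeline described in the abstract (mouth ROI → 2-D DCT → feature selection) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the ROI size (64×64), the number of retained coefficients (k=100), and the zig-zag low-frequency selection rule are common choices assumed here for concreteness.

```python
import numpy as np
from scipy.fftpack import dct

def dct2(roi):
    # Separable 2-D type-II DCT: transform rows, then columns.
    return dct(dct(roi, axis=0, norm='ortho'), axis=1, norm='ortho')

def dct_features(roi, k=100):
    # Feature selection: keep the k lowest-frequency coefficients,
    # visited in zig-zag order from the top-left (DC) corner.
    coeffs = dct2(roi.astype(np.float64))
    h, w = coeffs.shape
    order = sorted(((i, j) for i in range(h) for j in range(w)),
                   key=lambda p: (p[0] + p[1], p[0]))
    return np.array([coeffs[i, j] for i, j in order[:k]])

# Stand-in for a grayscale mouth ROI cropped from one video frame.
roi = np.random.rand(64, 64)
feats = dct_features(roi, k=100)
```

Per-frame vectors like `feats` would then be fed to the downstream classifier; the paper's point is that small ROI shifts and deformations perturb these DCT coefficients, motivating the more deformation-stable scattering representation.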
Bibliographic reference. Marcheret, Etienne / Potamianos, Gerasimos / Vopicka, Josef / Goel, Vaibhava (2015): "Scattering vs. discrete cosine transform features in visual speech processing", In FAAVSP-2015, 175-180.