Auditory-Visual Speech Processing (AVSP) 2013

Annecy, France
August 29 - September 1, 2013

GMM Mapping Of Visual Features of Cued Speech From Speech Spectral Features

Zuheng Ming, Denis Beautemps, Gang Feng

Univ. Grenoble Alpes, GIPSA-lab, and CNRS, UMR 5216, Grenoble, France

In this paper, we present a statistical method based on GMM modeling to map acoustic speech spectral features to the visual features of Cued Speech under the Minimum Mean-Square Error (MMSE) regression criterion. The mapping operates directly at the low signal level, which is innovative and differs from the classic text-to-visual approach. Two training methods for the GMM, namely the Expectation-Maximization (EM) algorithm and a supervised training method, are discussed. For comparison with the GMM-based mapping, we first present results obtained with a Multiple Linear Regression (MLR) model, also at the low signal level, and study the limitations of that approach. The experimental results demonstrate that the GMM-based mapping significantly improves performance over the MLR model, especially when the linear correlation between the target and the predictor is weak, as is the case for the hand positions of Cued Speech and the acoustic speech spectral features.
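As a rough illustration of the GMM-based MMSE mapping the abstract describes (a minimal sketch, not the authors' implementation): a GMM is fitted on joint acoustic-visual vectors z = [x; y], and the visual features are then predicted as the conditional expectation E[y | x], a responsibility-weighted sum of per-component linear regressions. All dimensions and parameter values below are hypothetical placeholders.

```python
import numpy as np

def gmm_mmse_map(weights, means, covs, x, dx):
    """MMSE estimate E[y | x] under a joint GMM over z = [x; y].

    weights : (K,) mixture weights
    means   : (K, dx+dy) joint means
    covs    : (K, dx+dy, dx+dy) full joint covariances
    x       : (dx,) acoustic feature vector
    dx      : dimensionality of x within the joint vector
    """
    K = len(weights)
    log_r = np.log(weights).copy()   # becomes log p(k) + log N(x; mu_x_k, Sxx_k)
    cond_means = []
    for k in range(K):
        mu_x, mu_y = means[k, :dx], means[k, dx:]
        Sxx = covs[k, :dx, :dx]
        Syx = covs[k, dx:, :dx]
        diff = x - mu_x
        sol = np.linalg.solve(Sxx, diff)
        _, logdet = np.linalg.slogdet(Sxx)
        log_r[k] += -0.5 * (diff @ sol + logdet + dx * np.log(2.0 * np.pi))
        # Per-component conditional mean: mu_y + Syx Sxx^{-1} (x - mu_x)
        cond_means.append(mu_y + Syx @ sol)
    # Normalize responsibilities p(k | x) in a numerically stable way
    r = np.exp(log_r - log_r.max())
    r /= r.sum()
    return np.sum(r[:, None] * np.array(cond_means), axis=0)

# Toy example: 1-D acoustic feature, 1-D visual feature, two components
weights = np.array([0.5, 0.5])
means = np.array([[-2.0, -4.0], [2.0, 4.0]])
covs = np.array([[[1.0, 0.9], [0.9, 1.0]],
                 [[1.0, 0.9], [0.9, 1.0]]])
y_hat = gmm_mmse_map(weights, means, covs, np.array([2.0]), dx=1)
```

In practice the mixture parameters would be estimated on parallel acoustic-visual training data, e.g. by EM or, as in the paper, by a supervised training scheme; the prediction step above is the same either way.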

Index Terms: Cued Speech, LSP, MFCC, GMM mapping.

Full Paper

Bibliographic reference. Ming, Zuheng / Beautemps, Denis / Feng, Gang (2013): "GMM mapping of visual features of cued speech from speech spectral features", In AVSP-2013, 191-196.