8th European Conference on Speech Communication and Technology

Geneva, Switzerland
September 1-4, 2003


Maximum Conditional Mutual Information Projection for Speech Recognition

Mohamed Kamal Omar, Mark Hasegawa-Johnson

University of Illinois at Urbana-Champaign, USA

Linear discriminant analysis (LDA), in its original model-free formulation, is best suited to classification problems with equal-covariance classes. Heteroscedastic discriminant analysis (HDA) removes this equal-covariance constraint and is therefore more suitable for automatic speech recognition (ASR) systems. However, maximizing the HDA objective function does not correspond directly to minimizing the recognition error. In its original formulation, HDA solves a maximum likelihood estimation problem in the original feature space to calculate the HDA transformation matrix. Since the dimension of the original feature space in ASR problems is usually high, estimating the HDA transformation matrix is computationally expensive and requires a large amount of training data. This paper presents a generalization of LDA that solves both problems. We first show that calculating the LDA projection matrix is a maximum mutual information estimation problem in the lower-dimensional space, subject to constraints on the models of the joint conditional and unconditional probability density functions (PDFs) of the features. By relaxing these constraints, we then develop a dimensionality reduction approach that maximizes the conditional mutual information between the class identity and the feature vector in the lower-dimensional space, given the recognizer model. Using this approach, we achieved a 1% improvement in phoneme recognition accuracy over the baseline system, as well as improvements in recognition accuracy over both the LDA and HDA approaches.
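For reference, the model-free LDA formulation that the abstract contrasts against computes its projection from the generalized eigenvectors of the between-class and within-class scatter matrices. The sketch below illustrates that classical baseline only (not the paper's conditional-mutual-information method); it assumes NumPy, and the function name and interface are illustrative.

```python
import numpy as np

def lda_projection(X, y, p):
    """Classical (equal-covariance) LDA: find a d-by-p matrix A that
    maximizes between-class scatter relative to within-class scatter.
    X: (n, d) feature matrix; y: (n,) class labels; p: target dimension."""
    classes = np.unique(y)
    d = X.shape[1]
    mean_total = X.mean(axis=0)
    Sw = np.zeros((d, d))  # within-class scatter
    Sb = np.zeros((d, d))  # between-class scatter
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)
        diff = (mc - mean_total)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
    # Top-p generalized eigenvectors of (Sb, Sw), i.e. eigenvectors
    # of Sw^{-1} Sb with the largest eigenvalues.
    eigvals, eigvecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
    order = np.argsort(eigvals.real)[::-1]
    return eigvecs[:, order[:p]].real  # d x p projection matrix
```

Because the rank of Sb is at most (number of classes - 1), classical LDA can produce at most that many useful directions; this is one of the model constraints that the maximum conditional mutual information formulation relaxes.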


Bibliographic reference.  Omar, Mohamed Kamal / Hasegawa-Johnson, Mark (2003): "Maximum conditional mutual information projection for speech recognition", In EUROSPEECH-2003, 505-508.