INTERSPEECH 2009
10th Annual Conference of the International Speech Communication Association

Brighton, United Kingdom
September 6-10, 2009

A Semi-Supervised Version of Heteroscedastic Linear Discriminant Analysis

Haolang Zhou, Damianos Karakos, Andreas G. Andreou

Johns Hopkins University, USA

Heteroscedastic Linear Discriminant Analysis (HLDA) was introduced in [1] as an extension of Linear Discriminant Analysis to the case where the class-conditional distributions have unequal covariances. The HLDA transform is computed so that the likelihood of the labeled training data is maximized, under the constraint that the projected distributions are orthogonal to a nuisance subspace that offers no discrimination. In this paper we consider the semi-supervised setting, in which a large amount of unlabeled data is also available. We derive update equations for the parameters of the projected distributions, which are estimated jointly with the HLDA transform, and we empirically compare the resulting semi-supervised method with the case where no unlabeled data are available. Experimental results with synthetic data and with real data from a vowel recognition task show that, in most cases, semi-supervised HLDA improves performance over standard HLDA.
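To make the objective concrete, the sketch below restates the maximum-likelihood HLDA criterion of [1] and indicates one plausible way an unlabeled-data term can enter; the mixture form of that term is an illustrative assumption, not the paper's exact formulation, whose update equations are given in the full text. Here $\theta$ is the $n \times n$ transform with rows $\theta_i$, the first $p$ rows span the discriminative subspace, $W_j$ and $N_j$ are the sample covariance and count of labeled class $j$ (with $N = \sum_j N_j$), and $T$ is the total covariance of the labeled data:

$$
\hat{\theta} \;=\; \arg\max_{\theta}\Big\{\, N \log\lvert\det\theta\rvert
\;-\; \frac{N}{2}\sum_{i=p+1}^{n}\log\!\big(\theta_i T \theta_i^{\top}\big)
\;-\; \frac{1}{2}\sum_{j=1}^{J} N_j \sum_{i=1}^{p}\log\!\big(\theta_i W_j \theta_i^{\top}\big) \Big\}.
$$

A natural semi-supervised variant, assumed here only for illustration, augments this labeled-data log-likelihood with a term for the unlabeled set $\mathcal{U}$ in which the class label is marginalized,

$$
\mathcal{L}_u(\theta, \Lambda) \;=\; \sum_{x \in \mathcal{U}} \log \sum_{j=1}^{J} \pi_j \, p\big(\theta x \mid j;\, \Lambda\big),
$$

and alternates EM-style updates of the projected class parameters $\Lambda = \{\mu_j, \sigma_j^2, \pi_j\}$ with updates of $\theta$.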

Reference

  1. N. Kumar and A. G. Andreou, “Heteroscedastic discriminant analysis and reduced rank HMMs for improved speech recognition,” Speech Comm., vol. 26, pp. 283–297, 1998.

Bibliographic reference.  Zhou, Haolang / Karakos, Damianos / Andreou, Andreas G. (2009): "A semi-supervised version of heteroscedastic linear discriminant analysis", In INTERSPEECH-2009, 848-851.