Heteroscedastic Linear Discriminant Analysis (HLDA) was introduced in  as an extension of Linear Discriminant Analysis (LDA) to the case where the class-conditional distributions have unequal covariances. The HLDA transform is computed by maximizing the likelihood of the labeled training data, under the constraint that the projected distributions are orthogonal to a nuisance subspace that carries no discriminative information. In this paper we consider the semi-supervised setting, where a large amount of unlabeled data is also available. We derive update equations for the parameters of the projected distributions, which are estimated jointly with the HLDA transform, and we empirically compare the resulting semi-supervised HLDA against standard HLDA trained on the labeled data alone. Experiments on synthetic data and on real data from a vowel recognition task show that, in most cases, semi-supervised HLDA outperforms HLDA.
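To make the supervised starting point concrete, below is a minimal sketch of the maximum-likelihood HLDA objective in the style of Kumar and Andreou's diagonal formulation: a square transform `A` whose first `p` rows are discriminative (per-class variances) and whose remaining rows span the nuisance subspace (a single shared variance). The function names (`hlda_log_likelihood`, `fit_hlda`) and the use of a generic numerical optimizer are illustrative assumptions, not the authors' implementation; this sketch also covers only the fully supervised case, not the semi-supervised updates derived in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def hlda_log_likelihood(A_flat, X, y, p):
    """HLDA log-likelihood of a square transform A.

    Rows 0..p-1 of A are the discriminative directions (class-dependent
    variances); rows p..n-1 span the nuisance subspace, modeled with a
    single variance shared across classes.
    """
    N, n = X.shape
    A = A_flat.reshape(n, n)
    ll = N * np.log(abs(np.linalg.det(A)))        # Jacobian term
    sigma_T = np.cov(X.T, bias=True)              # total (ML) covariance
    for j in range(p, n):                         # nuisance rows
        a = A[j]
        ll -= 0.5 * N * np.log(a @ sigma_T @ a)
    for g in np.unique(y):                        # discriminative rows
        Xg = X[y == g]
        sigma_g = np.cov(Xg.T, bias=True)         # per-class ML covariance
        for j in range(p):
            a = A[j]
            ll -= 0.5 * len(Xg) * np.log(a @ sigma_g @ a)
    return ll

def fit_hlda(X, y, p):
    """Maximize the HLDA likelihood over A by generic numerical
    optimization, starting from the identity transform."""
    n = X.shape[1]
    res = minimize(lambda a: -hlda_log_likelihood(a, X, y, p),
                   np.eye(n).ravel())
    return res.x.reshape(n, n)
```

After fitting, the dimension-reduced features are `X @ A[:p].T`; the remaining rows of `A` are discarded as the nuisance subspace. A closed-form or row-wise iterative update (as in the original HLDA literature) would be used in practice rather than a black-box optimizer.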
Bibliographic reference. Zhou, Haolang / Karakos, Damianos / Andreou, Andreas G. (2009): "A semi-supervised version of heteroscedastic linear discriminant analysis", In INTERSPEECH-2009, 848-851.