Between-Class Covariance Correction For Linear Discriminant Analysis in Language Recognition

Abhinav Misra, Qian Zhang, Finnian Kelly, John H.L. Hansen


Linear Discriminant Analysis (LDA) is one of the most widely-used channel compensation techniques in current speaker and language recognition systems. In this study, we propose a Between-Class Covariance Correction (BCC) technique to improve language recognition performance. This approach builds on the idea of Within-Class Covariance Correction (WCC), which was introduced as a means of compensating for mismatch between development and test data in speaker recognition. In BCC, we compute eigendirections representing the multi-modal distributions of language i-vectors, and show that incorporating these directions in LDA leads to an improvement in recognition performance. Considering each cluster in the multi-modal i-vector distribution as a separate class, the between- and within-cluster covariance matrices are used to update the global between-language covariance. This is in contrast to WCC, for which the within-class covariance is updated. Using the proposed method, a relative overall improvement of 8.4% in Equal Error Rate (EER) is obtained on the 2015 NIST Language Recognition Evaluation (LRE) data. Our approach offers insights toward addressing the challenging problem of mismatch compensation, which has much wider applications in both speaker and language recognition.
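The abstract does not give the exact update equations, but the core idea (treat clusters within the multi-modal i-vector distribution as extra classes and fold their between-cluster scatter into the global between-language covariance before solving the LDA eigenproblem) can be sketched in NumPy. Everything here is illustrative: the function names (`scatter_matrices`, `bcc_lda`), the interpolation weight `alpha`, and the simple additive update are assumptions, not the paper's published formulation, and the cluster labels would in practice come from a clustering step (e.g. k-means) over the development i-vectors.

```python
import numpy as np

def scatter_matrices(X, labels):
    """Between- and within-class scatter of the rows of X, grouped by labels."""
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))
    Sw = np.zeros((d, d))
    for c in np.unique(labels):
        Xc = X[labels == c]
        mc = Xc.mean(axis=0)
        diff = (mc - mu)[:, None]
        Sb += len(Xc) * diff @ diff.T          # class-mean spread about global mean
        Sw += (Xc - mc).T @ (Xc - mc)          # spread of samples about class mean
    n = len(X)
    return Sb / n, Sw / n

def bcc_lda(X, lang_labels, cluster_labels, alpha=0.5, n_dims=2):
    """LDA with a between-class covariance correction (illustrative sketch):
    the between-language covariance is augmented with the between-cluster
    covariance, where each cluster of the multi-modal i-vector distribution
    is treated as a separate class. `alpha` is an assumed mixing weight."""
    Sb_lang, Sw_lang = scatter_matrices(X, lang_labels)
    Sb_clus, _ = scatter_matrices(X, cluster_labels)
    Sb = Sb_lang + alpha * Sb_clus             # corrected between-class covariance
    # Solve the LDA generalized eigenproblem Sw^{-1} Sb v = lambda v
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw_lang) @ Sb)
    order = np.argsort(-vals.real)
    return vecs[:, order[:n_dims]].real        # top discriminant directions

# Synthetic demo: 3 "languages", each with 2 sub-clusters (multi-modal)
rng = np.random.default_rng(0)
blocks, langs, clusters = [], [], []
for lang in range(3):
    for sub in range(2):
        center = rng.normal(scale=5.0, size=4)
        blocks.append(center + rng.normal(scale=1.0, size=(50, 4)))
        langs += [lang] * 50
        clusters += [2 * lang + sub] * 50
X = np.vstack(blocks)
W = bcc_lda(X, np.array(langs), np.array(clusters), alpha=0.5, n_dims=2)
```

The demo projects 4-dimensional synthetic "i-vectors" onto the two leading corrected discriminant directions; with `alpha=0`, `bcc_lda` reduces to ordinary multi-class LDA.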


DOI: 10.21437/Odyssey.2016-10

Cite as

Misra, A., Zhang, Q., Kelly, F., Hansen, J.H. (2016) Between-Class Covariance Correction For Linear Discriminant Analysis in Language Recognition. Proc. Odyssey 2016, 68-73.

BibTeX
@inproceedings{Misra+2016,
  author={Abhinav Misra and Qian Zhang and Finnian Kelly and John H.L. Hansen},
  title={Between-Class Covariance Correction For Linear Discriminant Analysis in Language Recognition},
  year={2016},
  booktitle={Odyssey 2016},
  doi={10.21437/Odyssey.2016-10},
  url={http://dx.doi.org/10.21437/Odyssey.2016-10},
  pages={68--73}
}