This paper presents, for the first time, unsupervised discriminative training of probabilistic linear discriminant analysis (unsupervised DT-PLDA). While discriminative training avoids the problem of generative training, whose probabilistic model assumptions often do not agree with actual data, it has been difficult to apply to unsupervised scenarios because it can fit data with almost any labels. This paper focuses on unsupervised training of DT-PLDA for domain adaptation in i-vector-based speaker verification systems using unlabeled in-domain data. The proposed method makes such discriminative training, i.e., estimation of model parameters and unknown labels, possible by adding a regularization term based on data statistics to the original DT-PLDA objective function. An experiment on a NIST Speaker Recognition Evaluation task shows that the proposed method outperforms a conventional method based on speaker clustering and performs almost as well as supervised DT-PLDA.
Cite as: Wang, Q., Koshinaka, T. (2017) Unsupervised Discriminative Training of PLDA for Domain Adaptation in Speaker Verification. Proc. Interspeech 2017, 3727-3731, doi: 10.21437/Interspeech.2017-727
@inproceedings{wang17l_interspeech,
  author={Qiongqiong Wang and Takafumi Koshinaka},
  title={{Unsupervised Discriminative Training of PLDA for Domain Adaptation in Speaker Verification}},
  year=2017,
  booktitle={Proc. Interspeech 2017},
  pages={3727--3731},
  doi={10.21437/Interspeech.2017-727}
}
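To make the idea in the abstract concrete, below is a minimal NumPy sketch of an objective with the general shape described: a discriminative PLDA loss over trial pairs plus a regularization term built from statistics of unlabeled in-domain data. The quadratic scoring function is the standard form used in discriminatively trained PLDA; the logistic loss, the covariance-based penalty, the weight alpha, and all variable names (Lam, Gam, c, k, X_indomain) are illustrative assumptions and do not reproduce the paper's exact formulation. In the unsupervised setting of the abstract, the trial labels passed to this objective would themselves be hypothesized and re-estimated along with the model parameters.

import numpy as np

def plda_score(x1, x2, Lam, Gam, c, k):
    # Standard DT-PLDA trial score: a quadratic function of the two i-vectors.
    return (x1 @ Lam @ x2 + x2 @ Lam @ x1
            + x1 @ Gam @ x1 + x2 @ Gam @ x2
            + c @ (x1 + x2) + k)

def regularized_objective(params, trials, labels, X_indomain, alpha=0.1):
    # Logistic (cross-entropy) DT-PLDA loss over hypothesized trial labels
    # (t = +1 for same-speaker, -1 for different-speaker), plus a placeholder
    # regularizer built from statistics of the unlabeled in-domain i-vectors.
    Lam, Gam, c, k = params
    loss = 0.0
    for (x1, x2), t in zip(trials, labels):
        s = plda_score(x1, x2, Lam, Gam, c, k)
        loss += np.logaddexp(0.0, -t * s)   # log(1 + exp(-t*s)), numerically stable
    # ASSUMPTION: this particular statistics term (a Frobenius penalty tying the
    # quadratic parameters to the empirical covariance of the in-domain data) is
    # illustrative only and is not the regularizer defined in the paper.
    emp_cov = np.cov(X_indomain, rowvar=False)
    reg = np.linalg.norm(Gam @ emp_cov + 0.5 * np.eye(Gam.shape[0]), "fro") ** 2
    return loss / len(labels) + alpha * reg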