Data Selection for Within-Class Covariance Estimation

Elliot Singer, Tyler Campbell, Douglas Reynolds


Methods for performing channel and session compensation in conjunction with subspace techniques have recently been the focus of considerable study and have led to significant gains in speaker recognition performance. While developers have typically exploited the vast archive of speaker-labeled data available from earlier NIST evaluations to train the within-class and across-class covariance matrices required by these techniques, little attention has been paid to the characteristics of the data required to perform the training efficiently. This paper focuses on within-class covariance normalization (WCCN) and shows that a reduction in training data requirements can be achieved through proper data selection. In particular, it is shown that the key variables are the total amount of data and the degree of handset variability, with the number of calls per handset playing a smaller role. The study offers insight into efficient WCCN training data collection in real-world applications.
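As context for the WCCN technique the abstract studies, the following is a minimal sketch of the standard WCCN estimation: average the per-speaker (within-class) scatter over all samples, then project with the Cholesky factor of its inverse so the within-class covariance becomes identity. The function name, toy data, and dimensions are illustrative, not from the paper.

```python
import numpy as np

def wccn_transform(X, labels):
    """Estimate the WCCN projection from speaker-labeled vectors.

    X: (n_samples, dim) array of, e.g., i-vectors.
    labels: per-sample speaker labels.
    Returns B such that the transformed vectors are X @ B, where
    B is the Cholesky factor of the inverse within-class covariance.
    """
    X = np.asarray(X, dtype=float)
    labels = np.asarray(labels)
    dim = X.shape[1]
    W = np.zeros((dim, dim))
    for c in np.unique(labels):
        Xc = X[labels == c]
        Xc = Xc - Xc.mean(axis=0)   # remove the per-speaker mean
        W += Xc.T @ Xc              # accumulate within-class scatter
    W /= len(X)                     # average over all samples
    # B satisfies B @ B.T = W^{-1}; cholesky returns the lower factor
    B = np.linalg.cholesky(np.linalg.inv(W))
    return B

# Toy usage: two "speakers" with synthetic session variability.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 1.0, size=(50, 4)) for m in (0.0, 3.0)])
labels = [0] * 50 + [1] * 50
B = wccn_transform(X, labels)
Xw = X @ B   # within-class covariance of Xw is the identity
```

After the projection, recomputing the within-class covariance of `Xw` with the same speaker grouping yields the identity matrix, which is the defining property of WCCN.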


DOI: 10.21437/Interspeech.2016-1282

Cite as

Singer, E., Campbell, T., Reynolds, D. (2016) Data Selection for Within-Class Covariance Estimation. Proc. Interspeech 2016, 1805-1809.

Bibtex
@inproceedings{Singer+2016,
author={Elliot Singer and Tyler Campbell and Douglas Reynolds},
title={Data Selection for Within-Class Covariance Estimation},
year=2016,
booktitle={Interspeech 2016},
doi={10.21437/Interspeech.2016-1282},
url={http://dx.doi.org/10.21437/Interspeech.2016-1282},
pages={1805--1809}
}