Odyssey 2010: The Speaker and Language Recognition Workshop

Brno, Czech Republic
28 June – 1 July 2010

Training Universal Background Models for Speaker Recognition

Mohamed Omar, Jason Pelecanos (1)

(1) IBM T. J. Watson Research Center

Universal background models (UBMs) in speaker recognition systems are typically Gaussian mixture models (GMMs) trained on a large amount of data using the maximum likelihood criterion. This paper investigates three alternative criteria for training the UBM. In the first, we cluster an existing automatic speech recognition (ASR) acoustic model to generate the UBM. In each of the other two, we use statistics based on the speaker labels of the development data to regularize the maximum likelihood objective function used to train the UBM. We present an iterative algorithm, similar to the expectation maximization (EM) algorithm, to train the UBM under each of these regularized maximum likelihood criteria. We present several experiments showing that combining only two systems outperforms the best published results on the English telephone tasks of the NIST 2008 speaker recognition evaluation.
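The maximum-likelihood baseline the paper regularizes is standard EM training of a GMM-UBM. As a point of reference (not the authors' implementation, and without the speaker-label regularization terms the paper proposes), a minimal sketch of diagonal-covariance GMM training by EM might look like this; `train_ubm` and all its parameters are illustrative names:

```python
import numpy as np

def train_ubm(features, n_components=4, n_iters=20, seed=0):
    """Toy UBM: fit a diagonal-covariance GMM by maximum-likelihood EM.

    features: (n_frames, dim) array of acoustic feature vectors.
    Returns mixture weights, means, variances, and the per-iteration
    total log-likelihoods (which EM guarantees are non-decreasing).
    """
    rng = np.random.default_rng(seed)
    n, d = features.shape
    # Initialize means from random frames, unit variances, uniform weights.
    means = features[rng.choice(n, n_components, replace=False)]
    variances = np.ones((n_components, d))
    weights = np.full(n_components, 1.0 / n_components)
    log_likelihoods = []
    for _ in range(n_iters):
        # E-step: log Gaussian density of each frame under each component.
        diff = features[:, None, :] - means[None, :, :]          # (n, K, d)
        log_g = -0.5 * (np.sum(diff**2 / variances, axis=2)
                        + np.sum(np.log(2 * np.pi * variances), axis=1))
        log_p = log_g + np.log(weights)                          # (n, K)
        log_norm = np.logaddexp.reduce(log_p, axis=1)            # (n,)
        log_likelihoods.append(log_norm.sum())
        resp = np.exp(log_p - log_norm[:, None])                 # responsibilities
        # M-step: re-estimate weights, means, variances from soft counts.
        counts = resp.sum(axis=0)                                # (K,)
        weights = counts / n
        means = (resp.T @ features) / counts[:, None]
        variances = (resp.T @ features**2) / counts[:, None] - means**2
        variances = np.maximum(variances, 1e-6)                  # variance flooring
    return weights, means, variances, log_likelihoods
```

The regularized criteria in the paper would add a speaker-label-dependent penalty to the objective, modifying the M-step updates accordingly; the EM skeleton above stays the same.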

Full Paper (PDF)

Bibliographic reference.  Omar, Mohamed / Pelecanos, Jason (2010): "Training Universal Background Models for Speaker Recognition", In Odyssey-2010, paper 010.