In this paper, the relevance factor in maximum a posteriori (MAP) adaptation of a Gaussian mixture model (GMM) from a universal background model (UBM) is studied for language recognition. In conventional MAP adaptation, the relevance factor is typically set to an empirically chosen constant. Since the relevance factor determines how much the observed training data influence the model adaptation, and thus the resulting GMM models, we believe it should be data-dependent for more effective modeling. We formulate the estimation of the relevance factor in a systematic manner and study its role in characterizing spoken languages with supervectors. We use a Bhattacharyya-based language recognition system on the National Institute of Standards and Technology (NIST) language recognition evaluation (LRE) 2009 task to investigate the validity of the data-dependent relevance factor. Experimental results show that the proposed relevance factor yields improved performance.
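As background to the conventional MAP adaptation the abstract refers to, the sketch below shows standard mean-only MAP adaptation of a GMM from a UBM with a constant scalar relevance factor r, where the adaptation coefficient for each component is alpha_c = n_c / (n_c + r). This is a minimal illustration, not the paper's proposed data-dependent estimator; the function name and interface are illustrative.

```python
import numpy as np

def map_adapt_means(ubm_means, data, posteriors, relevance_factor=16.0):
    """Mean-only MAP adaptation of GMM means from a UBM.

    ubm_means:        (C, D) UBM component means
    data:             (N, D) feature vectors from the adaptation set
    posteriors:       (N, C) component occupancy probabilities
    relevance_factor: scalar r controlling how much the data move the means
    """
    # Zeroth-order statistics: soft counts per component
    n_c = posteriors.sum(axis=0)                          # (C,)
    # First-order statistics and posterior-weighted data means
    f_c = posteriors.T @ data                             # (C, D)
    e_c = f_c / np.maximum(n_c, 1e-10)[:, None]           # E_c[x]
    # Adaptation coefficient: alpha_c = n_c / (n_c + r)
    alpha = (n_c / (n_c + relevance_factor))[:, None]     # (C, 1)
    # Interpolate between the data mean and the UBM prior mean
    return alpha * e_c + (1.0 - alpha) * ubm_means
```

With large soft counts alpha_c approaches 1 and the adapted mean follows the data; with a large r the mean stays close to the UBM prior, which is the trade-off the relevance factor controls.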
Bibliographic reference. You, Chang Huai / Li, Haizhou / Lee, Kong Aik (2011): "Study on the relevance factor of maximum a posteriori with GMM for language recognition", In INTERSPEECH-2011, 2893-2896.