7th International Conference on Spoken Language Processing

September 16-20, 2002
Denver, Colorado, USA

Hierarchical Gaussian Mixture Model for Speaker Verification

Ming Liu (1), Eric Chang (2), Bei-qian Dai (1)

(1) University of Science and Technology of China, China; (2) Microsoft Research Asia, China

This paper proposes the Hierarchical Gaussian Mixture Model (HGMM), a novel type of Gaussian mixture model for text-independent speaker verification. HGMM aims to maximize the efficiency of MAP training on the Universal Background Model (UBM): thanks to the hierarchical structure, the parameters of one Gaussian component can also be adapted by the observation vectors of neighboring Gaussian components. HGMM can also be viewed as a generalized GMM in which each Gaussian component is replaced by a local GMM; this local mixture describes the local observation space better than a single Gaussian distribution. Experiments on the NIST 1999 Evaluation corpus show that HGMM achieves an 18% relative reduction in EER compared with the conventional GMM.
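The likelihood computation implied by this structure can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: it assumes diagonal covariances and shows an HGMM as a top-level mixture whose components are themselves local GMMs, so the overall likelihood is a weighted sum of local-GMM likelihoods.

```python
# Hypothetical sketch of HGMM scoring (not the paper's code):
# a GMM whose "components" are local GMMs, diagonal covariances assumed.
import numpy as np

def logsumexp(a):
    # Numerically stable log(sum(exp(a))).
    a = np.asarray(a, dtype=float)
    m = a.max()
    return m + np.log(np.sum(np.exp(a - m)))

def gauss_logpdf(x, mean, var):
    # Log-density of a diagonal-covariance Gaussian.
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var)

def hgmm_loglik(x, top_weights, local_gmms):
    # local_gmms[k] = (weights, means, vars) of the k-th local GMM.
    # log p(x) = logsum_k [ log w_k + logsum_j ( log w_kj + log N(x; mu_kj, var_kj) ) ]
    comps = []
    for w_k, (w, mu, var) in zip(top_weights, local_gmms):
        local = [np.log(w[j]) + gauss_logpdf(x, mu[j], var[j])
                 for j in range(len(w))]
        comps.append(np.log(w_k) + logsumexp(local))
    return logsumexp(comps)
```

With a single top-level component holding a single Gaussian, this reduces exactly to the conventional GMM score, which is the sense in which HGMM generalizes GMM.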


Bibliographic reference. Liu, Ming / Chang, Eric / Dai, Bei-qian (2002): "Hierarchical Gaussian mixture model for speaker verification", in Proc. ICSLP 2002, 1353-1356.