16th Annual Conference of the International Speech Communication Association

Dresden, Germany
September 6-10, 2015

Parallel Inference of Dirichlet Process Gaussian Mixture Models for Unsupervised Acoustic Modeling: A Feasibility Study

Hongjie Chen (1), Cheung-Chi Leung (2), Lei Xie (1), Bin Ma (2), Haizhou Li (2)

(1) Northwestern Polytechnical University, China
(2) A*STAR, Singapore

We adopt a Dirichlet process Gaussian mixture model (DPGMM) for unsupervised acoustic modeling and represent speech frames with Gaussian posteriorgrams. The model performs unsupervised clustering on untranscribed data, and each Gaussian component can be considered a cluster of sounds from various speakers. The model infers its complexity (i.e., the number of Gaussian components) from the data. For computational efficiency, we use a parallel sampler for model inference. Our experiments are conducted on the corpus provided by the Zero Resource Speech Challenge. Experimental results show that the unsupervised DPGMM posteriorgrams clearly outperform MFCCs and perform comparably to posteriorgrams derived from language-mismatched phoneme recognizers in terms of error rate in the ABX discrimination test. The error rates can be further reduced by fusing these two kinds of posteriorgrams.
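The core idea of the abstract can be illustrated with a minimal sketch: fit a Dirichlet-process GMM to untranscribed feature frames, let the model prune unneeded components, and represent each frame by its posterior over the learned Gaussians (a posteriorgram). This sketch uses scikit-learn's truncated variational `BayesianGaussianMixture` rather than the parallel sampler described in the paper, and the synthetic "frames" are hypothetical stand-ins for real MFCC vectors.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Hypothetical stand-in for untranscribed MFCC frames (13-dim),
# drawn from two well-separated sound "clusters".
frames = np.vstack([
    rng.normal(loc=-3.0, scale=0.5, size=(200, 13)),
    rng.normal(loc=3.0, scale=0.5, size=(200, 13)),
])

# Truncated Dirichlet-process GMM: n_components is only an upper bound;
# the DP prior drives the weights of unneeded components toward zero,
# so model complexity is effectively inferred from the data.
dpgmm = BayesianGaussianMixture(
    n_components=20,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="diag",
    max_iter=500,
    random_state=0,
).fit(frames)

# Posteriorgram: per-frame posterior probabilities over the components.
posteriorgram = dpgmm.predict_proba(frames)
print(posteriorgram.shape)
```

Note that scikit-learn's inference is variational, not the sampling-based parallel inference studied in the paper; the sketch only shows the modeling idea, not the paper's method.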

Full Paper

Bibliographic reference.  Chen, Hongjie / Leung, Cheung-Chi / Xie, Lei / Ma, Bin / Li, Haizhou (2015): "Parallel inference of Dirichlet process Gaussian mixture models for unsupervised acoustic modeling: a feasibility study", in INTERSPEECH-2015, 3189-3193.