In this work, automatic methods for determining the number of Gaussians per state in a set of hidden Markov models (HMMs) are studied. Four mix-up criteria are proposed to decide how to grow the states. These criteria, derived from maximum-likelihood scores, aim to increase the discrimination between states, yielding a different number of Gaussians per state. We compare the proposed methods with the common approach in which every state uses the same number of density functions, fixed in advance by the designer. Experimental results on a flexible large-vocabulary isolated-word recognizer with context-dependent models show that performance can be maintained while reducing the total number of density functions by 17% (from 2046 down to 1705).
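The paper's four likelihood-based criteria are not spelled out in the abstract, but the overall scheme they drive can be illustrated with a minimal sketch. The criterion below (grow the state whose training data currently have the lowest average log-likelihood, by splitting its heaviest Gaussian) is a hypothetical stand-in for the paper's mix-up criteria, and the 1-D diagonal-covariance setting is a simplification chosen for brevity:

```python
import math

# Illustrative sketch only, NOT the paper's exact criteria: grow HMM states
# with a variable number of Gaussian components per state. At each step the
# state whose frames are modelled worst (lowest average log-likelihood under
# its current mixture) receives one extra component, created by perturbing
# the mean of its heaviest Gaussian ("splitting"), until a global budget of
# density functions is spent.

def gauss_logpdf(x, mean, var):
    """Log-density of a 1-D Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def mixture_loglik(x, components):
    """Log-likelihood of x under a list of (weight, mean, var) components."""
    return math.log(sum(w * math.exp(gauss_logpdf(x, m, v))
                        for w, m, v in components))

def avg_state_loglik(data, components):
    """Average per-frame log-likelihood of a state's training data."""
    return sum(mixture_loglik(x, components) for x in data) / len(data)

def split_heaviest(components, eps=0.1):
    """Split the heaviest component in two, perturbing the means slightly."""
    i = max(range(len(components)), key=lambda k: components[k][0])
    w, m, v = components[i]
    rest = components[:i] + components[i + 1:]
    return rest + [(w / 2, m - eps, v), (w / 2, m + eps, v)]

def grow_states(state_data, total_budget):
    """Start every state with one Gaussian; repeatedly add a component to
    whichever state currently scores worst, until the budget is reached."""
    models = {}
    for s, data in state_data.items():
        mean = sum(data) / len(data)
        var = max(sum((x - mean) ** 2 for x in data) / len(data), 1e-3)
        models[s] = [(1.0, mean, var)]
    n_components = len(models)
    while n_components < total_budget:
        worst = min(models,
                    key=lambda s: avg_state_loglik(state_data[s], models[s]))
        models[worst] = split_heaviest(models[worst])
        n_components += 1
    return models
```

Run on toy data, a state with bimodal frames ends up with more components than a well-modelled unimodal one, which is the behaviour the variable-size approach exploits: the 17% reduction reported above comes from not spending components on states that do not need them. (In practice each split would be followed by EM re-estimation, omitted here for brevity.)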
Cite as: Martin del Alamo, C., Villarrubia, L., Gonzalez, F.J., Hernández, L.A. (1998) Unsupervised training of HMMs with variable number of mixture components per state. Proc. 5th International Conference on Spoken Language Processing (ICSLP 1998), paper 0443, doi: 10.21437/ICSLP.1998-186
@inproceedings{martindelalamo98_icslp,
  author    = {Cesar {Martin del Alamo} and Luis Villarrubia and Francisco Javier Gonzalez and Luis A. Hernández},
  title     = {{Unsupervised training of HMMs with variable number of mixture components per state}},
  year      = {1998},
  booktitle = {Proc. 5th International Conference on Spoken Language Processing (ICSLP 1998)},
  pages     = {paper 0443},
  doi       = {10.21437/ICSLP.1998-186}
}