Emission probability distributions in speech recognition have traditionally been associated with continuous random variables. The most successful models have been mixtures of Gaussians in the states of hidden Markov models to generate/capture observations. In this work we show how graphical models can be used to extract the joint information of more than two features. This is possible if we first quantize the speech features to a small number of levels and model them as discrete random variables. This paper presents a method to estimate a graphical model with a bounded number of dependencies, a subset of the framework of models based on directed acyclic graphs, Bayesian networks. Experimental results have been obtained with mixtures of graphical models, compared to baseline systems using mixtures of Gaussians with full and diagonal covariance matrices.
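The quantization step the abstract refers to can be sketched as follows. This is a minimal illustration only: the paper does not specify its quantizer, so the uniform per-dimension scheme, the function name `quantize_features`, and the choice of 8 levels are all assumptions for illustration.

```python
import numpy as np

def quantize_features(x, n_levels=8):
    """Uniformly quantize each feature dimension of x (frames x dims)
    into n_levels discrete symbols using per-dimension min/max.

    Hypothetical helper: the paper only states that features are
    quantized to a small number of levels, not how."""
    x = np.asarray(x, dtype=float)
    lo = x.min(axis=0)
    hi = x.max(axis=0)
    # Guard against division by zero on constant dimensions.
    span = np.where(hi > lo, hi - lo, 1.0)
    q = np.floor((x - lo) / span * n_levels).astype(int)
    # The maximum value maps exactly to n_levels; clip it back in range.
    return np.clip(q, 0, n_levels - 1)
```

Once features are discrete symbols, joint dependencies among several of them can be modeled with discrete conditional probability tables rather than continuous densities.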
Bibliographic reference. Miguel, Antonio / Ortega, Alfonso / Buera, L. / Lleida, Eduardo (2009): "Graphical models for discrete hidden Markov models in speech recognition", In INTERSPEECH-2009, 1411-1414.