Discriminative model combination integrates several model scores using discriminatively trained weighting factors. In recent research, context-dependent scaling is often applied. One limitation of this approach is that it introduces a large number of parameters, and a large parameter set trained on limited data can cause training instability. In this paper, we propose to use automatically induced contexts modeled by phonetic decision trees. Questions at the tree nodes are chosen to maximize the minimum phone error criterion, and a first-order approximation of the objective increase is used for question selection to make tree growing efficient. Experimental results on continuous speech recognition show that the method is capable of inducing crucial phonetic contexts and achieves error reduction with far fewer parameters, compared with results from manually selected phonetic contexts.
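The core idea can be illustrated with a minimal sketch: in discriminative model combination, each knowledge source's log-score is scaled by a weight, and with context-dependent scaling the weight depends on the phonetic context, here looked up via a decision tree over neighboring phones. The tree, its question, and all weight values below are hypothetical placeholders, not the paper's trained parameters.

```python
# Hedged sketch of context-dependent discriminative model combination.
# A tiny decision tree maps a phonetic context (left/right phone) to a
# leaf holding per-model weights; the combined score is the weighted
# sum of log-scores. All weights and the vowel question are invented
# for illustration, not taken from the paper.

def context_weights(tree, left_phone, right_phone):
    """Walk a tree of (question, yes_child, no_child) nodes; questions
    inspect the neighboring phones. Leaves are dicts: model -> weight."""
    node = tree
    while isinstance(node, tuple):
        question, yes_child, no_child = node
        node = yes_child if question(left_phone, right_phone) else no_child
    return node

def combined_score(log_scores, tree, left_phone, right_phone):
    """Log-linear combination with context-dependent scaling factors."""
    weights = context_weights(tree, left_phone, right_phone)
    return sum(weights[m] * s for m, s in log_scores.items())

# Hypothetical one-question tree: "is the left phone a vowel?"
VOWELS = {"a", "e", "i", "o", "u"}
tree = (lambda l, r: l in VOWELS,
        {"acoustic": 1.2, "lm": 0.8},   # leaf for vowel left-context
        {"acoustic": 0.9, "lm": 1.1})   # leaf for all other contexts

scores = {"acoustic": -42.0, "lm": -7.5}
print(round(combined_score(scores, tree, "a", "t"), 2))  # vowel context
print(round(combined_score(scores, tree, "t", "a"), 2))  # non-vowel context
```

In the paper's method the questions at each node are not hand-picked as above but selected automatically, greedily choosing the question whose first-order approximate increase in the minimum phone error objective is largest.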
Bibliographic reference. Huang, Hao / Li, Bing Hu (2011): "Lattice based discriminative model combination using automatically induced phonetic contexts", In INTERSPEECH-2011, 1941-1944.