12th Annual Conference of the International Speech Communication Association

Florence, Italy
August 27-31, 2011

Direct Error Rate Minimization of Hidden Markov Models

Joseph Keshet (1), Chih-Chieh Cheng (2), Mark Stoehr (3), David McAllester (1)

(1) Toyota Technological Institute at Chicago, USA
(2) University of California at San Diego, USA
(3) University of Chicago, USA

We explore discriminative training of HMM parameters that directly minimizes the expected error rate. In discriminative training one seeks to train a system to minimize a desired error function, such as word error rate, phone error rate, or frame error rate. We review a recent method (McAllester, Hazan and Keshet, 2010) that introduces an analytic expression for the gradient of the expected error rate. This expression leads to a perceptron-like update rule, which we adapt here to train HMMs in an online fashion. While the proposed method can work with any error function used in speech recognition, we evaluate it on phoneme recognition on TIMIT, where the error function used for training is the frame error rate. Except in the case of GMMs with a single mixture component per state, the proposed update rule yields lower error rates, in terms of both frame error rate and phone error rate, than competing approaches, including MCE and large-margin training.
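The perceptron-like update from direct loss minimization can be sketched on a toy multiclass problem as follows. This is a minimal illustration under assumed choices (a one-hot block feature map, 0/1 loss, fixed step size and epsilon), not the paper's HMM implementation: the gradient of the expected loss is approximated by the difference between the features of a loss-adjusted prediction and the ordinary prediction, scaled by 1/epsilon.

```python
import numpy as np

def phi(x, y, num_classes):
    """Illustrative joint feature map: a copy of x placed in the block for class y."""
    f = np.zeros(num_classes * x.size)
    f[y * x.size:(y + 1) * x.size] = x
    return f

def direct_loss_step(w, x, y_true, num_classes, eta=0.5, eps=0.1):
    """One perceptron-like direct-loss-minimization update (toy sketch).

    y_hat    : ordinary argmax prediction under the current weights
    y_direct : loss-adjusted argmax, argmax_y [w . phi(x, y) + eps * loss(y, y_true)]
    The update moves w opposite to (phi(x, y_direct) - phi(x, y_hat)) / eps,
    an approximation to the gradient of the expected task loss.
    """
    scores = np.array([w @ phi(x, y, num_classes) for y in range(num_classes)])
    y_hat = int(np.argmax(scores))
    losses = np.array([float(y != y_true) for y in range(num_classes)])  # 0/1 loss
    y_direct = int(np.argmax(scores + eps * losses))
    grad = (phi(x, y_direct, num_classes) - phi(x, y_hat, num_classes)) / eps
    return w - eta * grad
```

In the HMM setting of the paper, the argmax over class labels is replaced by Viterbi decoding over state sequences and the 0/1 loss by the frame error rate; the online training loop simply applies one such step per utterance.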


Bibliographic reference. Keshet, Joseph / Cheng, Chih-Chieh / Stoehr, Mark / McAllester, David (2011): "Direct error rate minimization of hidden Markov models", in INTERSPEECH-2011, 449-452.