INTERSPEECH 2004 - ICSLP
Optimizing discriminative objectives in HMM parameter training has been shown to outperform Maximum Likelihood-based parameter estimation in numerous studies. This paper extends the Maximum Mutual Information (MMI) objective by applying utterance-specific weighting factors that are adjusted for minimum sentence error. In addition, the paper investigates tuning separate numerator and denominator weighting factors in a way that favors Maximum Likelihood parameter estimates, for reasons of stability and generalization to unseen data. Experimental evaluations carried out on German digit string data show that the Error-Weighted Maximum Mutual Information approach can outperform ordinary discriminative parameter estimation: it yields a substantially larger word error rate reduction than conventional MMI training, and the ML-preferred error-weighted variant gives a further small improvement.
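The objective sketched in the abstract can be written schematically as a per-utterance weighted MMI criterion. The exact form below is a hedged reconstruction, not taken from the paper; the weight symbols $\gamma_r$, $\gamma_r^{\mathrm{num}}$, $\gamma_r^{\mathrm{den}}$ and the acoustic scaling exponent $\kappa$ are assumptions:

$$
\mathcal{F}_{\text{EW-MMI}}(\lambda) \;=\; \sum_{r} \gamma_r \,\log
\frac{p_\lambda(X_r \mid W_r)^{\kappa}\, P(W_r)}
     {\sum_{W} p_\lambda(X_r \mid W)^{\kappa}\, P(W)}
$$

where $X_r$ is the $r$-th training utterance, $W_r$ its reference transcription, and $\gamma_r$ an utterance-specific weight adjusted for minimum sentence error. Splitting the weight into separate numerator and denominator factors,

$$
\mathcal{F}(\lambda) \;=\; \sum_{r} \Big[\, \gamma_r^{\mathrm{num}} \log p_\lambda(X_r \mid W_r)^{\kappa} P(W_r)
\;-\; \gamma_r^{\mathrm{den}} \log \sum_{W} p_\lambda(X_r \mid W)^{\kappa} P(W) \,\Big],
$$

makes the ML-preferred variant visible: lowering $\gamma_r^{\mathrm{den}}$ relative to $\gamma_r^{\mathrm{num}}$ de-emphasizes the competing-hypothesis term, and in the limit $\gamma_r^{\mathrm{den}} \to 0$ the criterion reduces to (weighted) Maximum Likelihood, which the paper motivates for stability and generalization.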
Bibliographic reference. Willett, Daniel (2004): "Error-weighted discriminative training for HMM parameter estimation", in INTERSPEECH-2004, 1661-1664.