Sixth International Conference on Spoken Language Processing
Discriminative training of hidden Markov models (HMMs) using segmental minimum classification error (MCE) training has been shown to work extremely well for certain speech recognition applications. It is, however, somewhat prone to overspecialization. This study investigates several techniques that improve the performance and generalization of the MCE algorithm. Improvements of up to 7% in relative error rate on the test set are achieved.
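For readers unfamiliar with MCE training, the sketch below illustrates the standard MCE objective (in the style of Juang and Katagiri), not the specific segmental formulation of this paper: a smoothed misclassification measure is passed through a sigmoid to give a differentiable approximation of the 0/1 classification error. The function name and parameter defaults are illustrative assumptions.

```python
import math

def mce_loss(scores, correct, eta=2.0, gamma=1.0, theta=0.0):
    """Smoothed MCE loss for one training sample (illustrative sketch).

    scores:  discriminant scores g_j(x) for each class, e.g. HMM log-likelihoods
    correct: index of the true class
    eta:     smoothing of the anti-discriminant (max over competitors as eta -> inf)
    gamma, theta: slope and offset of the sigmoid
    """
    g_true = scores[correct]
    competitors = [g for j, g in enumerate(scores) if j != correct]
    # Soft maximum over competing-class scores
    anti = (1.0 / eta) * math.log(
        sum(math.exp(eta * g) for g in competitors) / len(competitors)
    )
    d = -g_true + anti  # misclassification measure: positive when misclassified
    # Sigmoid maps d to a smooth, differentiable 0/1-error surrogate
    return 1.0 / (1.0 + math.exp(-gamma * d + theta))
```

When the correct class scores far above its competitors the loss approaches 0, and when it scores far below them the loss approaches 1; gradient descent on this surrogate drives the discriminative updates to the HMM parameters.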
Keywords: speech recognition, discriminative training, minimum classification error, overspecialization, overtraining
Bibliographic reference. Purnell, Darryl W. / Botha, Elizabeth C. (2000): "Improved performance and generalization of minimum classification error training for continuous speech recognition", In ICSLP-2000, vol.4, 165-168.