Sixth International Conference on Spoken Language Processing
(ICSLP 2000)

Beijing, China
October 16-20, 2000

Improved Performance and Generalization of Minimum Classification Error Training for Continuous Speech Recognition

Darryl W. Purnell, Elizabeth C. Botha

Department of Electrical and Electronic Engineering, University of Pretoria, Pretoria, South Africa

Discriminative training of hidden Markov models (HMMs) using segmental minimum classification error (MCE) training has been shown to work extremely well for certain speech recognition applications. It is, however, somewhat prone to overspecialization. This study investigates several techniques that improve the performance and generalization of the MCE algorithm. Relative error-rate improvements of up to 7% are achieved on the test set.
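For context, segmental MCE training is conventionally formulated (following Juang, Chou and Lee) with a misclassification measure and a smoothed zero-one loss for an utterance X with correct class k and HMM parameters Lambda. The sketch below shows that standard formulation; it is an assumption about the paper's setup, and the exact definitions used in the full paper may differ.

    d_k(X;\Lambda) = -g_k(X;\Lambda) + \log\Bigl[\tfrac{1}{M-1}\sum_{j\neq k}\exp\bigl(\eta\, g_j(X;\Lambda)\bigr)\Bigr]^{1/\eta},
    \qquad
    \ell\bigl(d_k(X;\Lambda)\bigr) = \frac{1}{1+\exp\bigl(-\gamma\, d_k(X;\Lambda)+\theta\bigr)}

Here g_k is the log-likelihood score of the correct class, the g_j are the competing-class scores, M is the number of classes, and eta, gamma, theta are smoothing constants. The HMM parameters are updated by gradient descent on this loss, which is what makes the training discriminative and, in turn, susceptible to overspecialization on the training set.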

Keywords: speech recognition, discriminative training, minimum classification error, overspecialization, overtraining


Bibliographic reference. Purnell, Darryl W. / Botha, Elizabeth C. (2000): "Improved performance and generalization of minimum classification error training for continuous speech recognition", in ICSLP-2000, vol. 4, 165-168.