Interspeech'2005 - Eurospeech

Lisbon, Portugal
September 4-8, 2005

Investigations on Error Minimizing Training Criteria for Discriminative Training in Automatic Speech Recognition

Wolfgang Macherey, Lars Haferkamp, Ralf Schlüter, Hermann Ney

RWTH Aachen University, Germany

Discriminative training criteria have been shown to consistently outperform maximum likelihood trained speech recognition systems. In this paper we employ the Minimum Classification Error (MCE) criterion to optimize the parameters of the acoustic model of a large-scale speech recognition system. The statistics for both the correct and the competing models are collected solely on word lattices, without the use of N-best lists. Thus, particularly for long utterances, the number of sentence alternatives taken into account is significantly larger than with N-best lists. The MCE criterion is embedded in an extended unifying approach for a class of discriminative training criteria, which allows the performance gain to be compared directly with the improvements obtained with other commonly used criteria such as Maximum Mutual Information (MMI) and Minimum Word Error (MWE). Experiments conducted on large vocabulary tasks show a consistent performance gain for MCE over MMI. Moreover, the improvements obtained with MCE turn out to be of the same order of magnitude as the performance gains obtained with the MWE criterion.
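As a rough sketch of the criterion discussed above (notation assumed here, not taken from the abstract): for each training utterance with acoustic observations X_r and correct transcription W_r, MCE smooths the sentence classification error via a misclassification measure between the correct and the competing sentence hypotheses, the latter accumulated over the word lattice rather than an N-best list.

```latex
% Hedged sketch of a lattice-based MCE criterion; \lambda denotes the
% acoustic model parameters, \eta and \rho are smoothing constants.
% Misclassification measure: correct model vs. weighted sum over competitors
d_r(\lambda) \;=\; -\log p_\lambda(X_r, W_r)
  \;+\; \frac{1}{\eta}\,\log \sum_{W \neq W_r} p_\lambda(X_r, W)^{\eta}

% Smoothed error count, summed over all training utterances r
F_{\mathrm{MCE}}(\lambda) \;=\; \sum_r \frac{1}{1 + e^{-2\rho\, d_r(\lambda)}}
```

For \eta \to \infty the competitor term approaches the best rival hypothesis; the sigmoid maps the measure to a smoothed 0/1 sentence error, which is what makes the criterion differentiable and hence optimizable with gradient-based methods.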


Bibliographic reference. Macherey, Wolfgang / Haferkamp, Lars / Schlüter, Ralf / Ney, Hermann (2005): "Investigations on error minimizing training criteria for discriminative training in automatic speech recognition", in INTERSPEECH-2005, 2133-2136.