Sixth European Conference on Speech Communication and Technology
We show how discriminative training methods, namely the Maximum Mutual Information and Maximum Discrimination approaches, can be adapted to the training of N-gram language models used as classifiers operating on symbol strings. By estimating the model parameters according to a discriminative objective function instead of Maximum Likelihood, the emphasis shifts from modeling each class exactly to classifying the samples correctly. The methods are shown to be suited to a variety of applications, such as the recognition of regulatory DNA sequences and language identification. Using phonotactic information, we achieve an error reduction of 10.7% (phoneme sequences) or 41.9% (codebook classes) with respect to standard ML estimation on a corpus of English and German sentences.
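The Maximum Mutual Information criterion mentioned in the abstract maximizes the posterior probability of the correct class rather than the likelihood of each class's own training data. A minimal sketch of the idea for N-gram classifiers on symbol strings follows; the function names, the bigram order, and the add-alpha smoothing are illustrative assumptions, not details taken from the paper:

```python
import math
from collections import defaultdict

def train_bigram(seqs, alpha=1.0):
    """ML bigram counts over symbol strings, with add-alpha smoothing
    (an illustrative choice, not the paper's smoothing scheme)."""
    counts = defaultdict(lambda: defaultdict(float))
    vocab = set()
    for s in seqs:
        s = ["<s>"] + list(s)
        vocab.update(s)
        for a, b in zip(s, s[1:]):
            counts[a][b] += 1
    return counts, vocab, alpha

def log_prob(model, seq):
    """Log-likelihood of a symbol string under a class bigram model."""
    counts, vocab, alpha = model
    s = ["<s>"] + list(seq)
    V = len(vocab) or 1
    lp = 0.0
    for a, b in zip(s, s[1:]):
        c = counts[a]
        lp += math.log((c.get(b, 0.0) + alpha) / (sum(c.values()) + alpha * V))
    return lp

def classify(models, priors, seq):
    """MAP decision: argmax over classes of log P(seq|c) + log P(c)."""
    return max(models, key=lambda c: log_prob(models[c], seq) + math.log(priors[c]))

def mmi_objective(models, priors, labeled):
    """MMI criterion (up to constants): sum over labeled samples of the
    log posterior of the correct class. Discriminative training adjusts
    the model parameters to increase this quantity, instead of the
    per-class likelihoods used by ML estimation."""
    total = 0.0
    for seq, c in labeled:
        num = log_prob(models[c], seq) + math.log(priors[c])
        den = math.log(sum(math.exp(log_prob(models[k], seq) + math.log(priors[k]))
                           for k in models))
        total += num - den
    return total
```

Because the denominator sums over all competing classes, raising the objective rewards parameter settings that separate the classes, not merely ones that fit each class's data well.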
Bibliographic reference. Ohler, Uwe / Harbeck, Stefan / Niemann, Heinrich (1999): "Discriminative training of language model classifiers", in Proc. EUROSPEECH'99, pp. 1607-1610.