ISCA Archive Interspeech 2007

Minimum rank error training for language modeling

Meng-Sung Wu, Jen-Tzung Chien

Discriminative training techniques have been successfully developed for many pattern recognition applications. In speech recognition, discriminative training aims to minimize the metric of word error rate. In an information retrieval system, however, the best performance should be achieved by maximizing the average precision. In this paper, we construct a discriminative n-gram language model for information retrieval under the metric of minimum rank error (MRE) rather than the conventional metric of minimum classification error. In the optimization procedure, we maximize the average precision and estimate the language model so as to attain the smallest ranking loss. In experiments on ad-hoc retrieval using TREC collections, the proposed MRE language model performs better than the maximum likelihood and the minimum classification error language models.
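To make the idea of rank-oriented discriminative training concrete, the sketch below illustrates one possible formulation under simplifying assumptions: unigram document language models, a smooth pairwise sigmoid surrogate for ranking errors (standing in for the average-precision-based MRE objective, whose exact form is given in the paper), and plain gradient descent. All names, sizes, and the loss itself are illustrative assumptions, not the authors' implementation.

# A minimal sketch of rank-error-driven language model training, assuming
# unigram document LMs and a pairwise sigmoid ranking surrogate.  This is an
# illustration of the general idea only; the paper's MRE objective based on
# average precision is not reproduced here.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 50     # toy vocabulary size (assumption)
N_DOCS = 6     # toy collection size (assumption)
LR = 0.5       # learning rate for gradient updates (assumption)

# Each document LM is parameterised by unnormalised logits; a softmax gives
# the unigram probabilities and keeps them valid during updates.
logits = rng.normal(size=(N_DOCS, VOCAB))

def doc_probs(logits):
    # Softmax over the vocabulary for every document.
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def query_loglik(query, probs):
    # Log-likelihood of the query terms under each document LM.
    return np.log(probs[:, query]).sum(axis=1)

def pairwise_rank_loss(scores, relevant):
    # Smooth surrogate for ranking errors: a sigmoid penalty whenever a
    # non-relevant document outscores a relevant one (~1 when misranked).
    rel = np.where(relevant)[0]
    non = np.where(~relevant)[0]
    margins = scores[non][None, :] - scores[rel][:, None]
    return 1.0 / (1.0 + np.exp(-margins))

# Toy query and relevance judgements (assumptions for illustration).
query = rng.integers(0, VOCAB, size=5)
relevant = np.array([True, True, False, False, False, False])
counts = np.bincount(query, minlength=VOCAB)

for step in range(200):
    probs = doc_probs(logits)
    scores = query_loglik(query, probs)
    sig = pairwise_rank_loss(scores, relevant)
    loss = sig.sum()

    # Gradient of the pairwise loss with respect to each document's score.
    grad_scores = np.zeros(N_DOCS)
    rel = np.where(relevant)[0]
    non = np.where(~relevant)[0]
    dsig = sig * (1.0 - sig)
    grad_scores[non] += dsig.sum(axis=0)
    grad_scores[rel] -= dsig.sum(axis=1)

    # Chain rule through the unigram softmax:
    # d score_d / d logit_{d,v} = counts[v] - |query| * p_d(v).
    grad_logits = np.zeros_like(logits)
    for d in range(N_DOCS):
        grad_logits[d] = grad_scores[d] * (counts - counts.sum() * probs[d])

    logits -= LR * grad_logits   # descend on the ranking loss

print("final pairwise ranking loss:", round(float(loss), 3))

Under these assumptions, lowering the pairwise loss pushes relevant documents above non-relevant ones for the query, which is the same qualitative goal as minimizing rank error to raise average precision.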


doi: 10.21437/Interspeech.2007-263

Cite as: Wu, M.-S., Chien, J.-T. (2007) Minimum rank error training for language modeling. Proc. Interspeech 2007, 614-617, doi: 10.21437/Interspeech.2007-263

@inproceedings{wu07c_interspeech,
  author={Meng-Sung Wu and Jen-Tzung Chien},
  title={{Minimum rank error training for language modeling}},
  year=2007,
  booktitle={Proc. Interspeech 2007},
  pages={614--617},
  doi={10.21437/Interspeech.2007-263}
}