INTERSPEECH 2010
11th Annual Conference of the International Speech Communication Association

Makuhari, Chiba, Japan
September 26-30, 2010

Efficient Estimation of Maximum Entropy Language Models with N-Gram Features: An SRILM Extension

Tanel Alumäe (1), Mikko Kurimo (2)

(1) Tallinn University of Technology, Estonia
(2) Aalto University, Finland

We present an extension to the SRILM toolkit for training maximum entropy language models with N-gram features. The extension uses a hierarchical parameter estimation procedure that keeps training time and memory consumption feasible for moderately large training data (hundreds of millions of words). Experiments on two speech recognition tasks indicate that the models trained with our implementation perform as well as or better than N-gram models built with interpolated Kneser-Ney discounting.
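For context, a maximum entropy language model with N-gram features takes the standard log-linear form; the following is a sketch of the general model class in our own notation, not the paper's exact formulation:

\[
P(w \mid h) \;=\; \frac{\exp\!\left(\sum_i \lambda_i\, f_i(h, w)\right)}{Z(h)},
\qquad
Z(h) \;=\; \sum_{w'} \exp\!\left(\sum_i \lambda_i\, f_i(h, w')\right),
\]

where each binary feature \(f_i(h, w)\) fires when a particular N-gram (a suffix of the history \(h\) followed by the word \(w\)) is present, \(Z(h)\) normalizes over the vocabulary, and the weights \(\lambda_i\) are estimated to maximize the (typically regularized) likelihood of the training data. Computing \(Z(h)\) for every history is the dominant cost, which is what the hierarchical estimation procedure is designed to mitigate.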


Bibliographic reference. Alumäe, Tanel / Kurimo, Mikko (2010): "Efficient estimation of maximum entropy language models with N-gram features: an SRILM extension", in INTERSPEECH-2010, 1820-1823.