EUROSPEECH 2003 - INTERSPEECH 2003
8th European Conference on Speech Communication and Technology

Geneva, Switzerland
September 1-4, 2003


Maximum Entropy Good-Turing Estimator for Language Modeling

Juan P. Piantanida, Claudio F. Estienne

University of Buenos Aires, Argentina

In this paper, we propose a new formulation of the classical Good-Turing estimator for n-gram language models. The new approach is based on defining a dynamic model of language production. Instead of assuming a fixed probability of occurrence for an n-gram over the whole text, we propose a maximum entropy approximation of a time-varying distribution. This approximation leads to a new distribution, which in turn is used to compute the expectations required by the Good-Turing estimator. The result is a new estimator that we call the Maximum Entropy Good-Turing estimator. Unlike the classical Good-Turing estimator, it requires neither approximations of the expectations nor windowing or other smoothing techniques. It also contains the well-known discounting estimators as special cases. Performance is evaluated both in terms of perplexity and word error rate on an N-best re-scoring task, and we compare the new estimator with other classical estimators. In all cases, our approach performs significantly better than the classical estimators.
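For background (this is the classical formulation the paper builds on, not the authors' maximum entropy variant), the standard Good-Turing estimator replaces an observed count r with r* = (r + 1) N_{r+1} / N_r, where N_r is the number of distinct n-grams seen exactly r times. The sketch below illustrates that classical rule on hypothetical toy counts; the function name and fallback handling are illustrative assumptions, and they show exactly the kind of zero-count gap that the paper's approach is meant to avoid smoothing by hand.

```python
from collections import Counter

def classical_good_turing(ngram_counts):
    """Classical Good-Turing adjusted counts: r* = (r + 1) * N_{r+1} / N_r,
    where N_r is the number of distinct n-grams observed exactly r times.
    Returns a dict mapping each observed count r to its adjusted count r*."""
    # N_r: "frequency of frequencies" over the observed n-gram counts
    count_of_counts = Counter(ngram_counts.values())
    adjusted = {}
    for r, n_r in sorted(count_of_counts.items()):
        n_r_plus_1 = count_of_counts.get(r + 1, 0)
        if n_r_plus_1 == 0:
            # Raw formula is undefined here; classical recipes patch this with
            # smoothing or windowing of N_r, which the paper seeks to avoid.
            adjusted[r] = float(r)
        else:
            adjusted[r] = (r + 1) * n_r_plus_1 / n_r
    return adjusted

# Hypothetical bigram counts from a toy corpus
bigram_counts = {("the", "cat"): 3, ("the", "dog"): 1, ("a", "cat"): 1,
                 ("a", "dog"): 2, ("the", "end"): 1}
print(classical_good_turing(bigram_counts))  # e.g. {1: 0.667, 2: 3.0, 3: 3.0}
```

The mass freed by discounting low counts is what gets reallocated to unseen n-grams; the paper's contribution is to derive the expectations E[N_r] from a maximum entropy model rather than from the raw, noisy counts-of-counts shown here.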


Bibliographic reference. Piantanida, Juan P. / Estienne, Claudio F. (2003): "Maximum Entropy Good-Turing Estimator for Language Modeling", in EUROSPEECH-2003, pp. 2277-2280.