EUROSPEECH 2001 Scandinavia
7th European Conference on Speech Communication and Technology
2nd INTERSPEECH Event

Aalborg, Denmark
September 3-7, 2001


Quantization-Based Language Model Compression

E. W. D. Whittaker, Bhiksha Raj

Compaq Cambridge Research Laboratory, USA

This paper describes two techniques for reducing the size of statistical back-off N-gram language models in computer memory. Compression is achieved through a combination of quantizing the language model probabilities and back-off weights, and pruning parameters that are found to be unnecessary after quantization. The recognition performance of the original and compressed language models is evaluated for three different language models on two different recognition tasks. The results show that the language models can be compressed by up to 60% of their original size with no significant loss in recognition performance. Moreover, the techniques described provide a principled method for compressing language models further while minimising the degradation in recognition performance.
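The abstract only sketches the two techniques, so a small illustration may help. The following Python sketch shows one way the idea could work: log-probabilities and back-off weights are mapped to small separate codebooks, and any N-gram whose quantized probability coincides with its quantized back-off estimate is pruned, since keeping it no longer changes the quantized model. The uniform quantizer, the bin counts, and the toy bigram numbers are all illustrative assumptions, not the paper's actual method.

    # Toy back-off bigram model in log10 space (all numbers invented
    # for illustration; not from the paper).
    unigram_logp = {"the": -0.8, "cat": -1.5, "sat": -1.7}
    unigram_bow  = {"the": -0.3, "cat": -0.2, "sat": -0.1}
    bigram_logp  = {("the", "cat"): -0.9,
                    ("cat", "sat"): -1.0,
                    ("the", "sat"): -1.9}   # close to its back-off estimate

    def make_uniform_quantizer(values, num_bins):
        """Uniform quantizer over the observed range (a sketch: the
        paper may use a non-uniform codebook). Returns encode/decode."""
        lo, hi = min(values), max(values)
        step = (hi - lo) / num_bins or 1.0   # avoid division by zero
        def encode(v):
            return max(0, min(int((v - lo) / step), num_bins - 1))
        def decode(idx):
            return lo + (idx + 0.5) * step   # bin centre
        return encode, decode

    # Separate small codebooks for probabilities and back-off weights,
    # so each parameter is stored as a short integer index.
    enc_p, dec_p = make_uniform_quantizer(
        list(unigram_logp.values()) + list(bigram_logp.values()), 16)
    enc_b, dec_b = make_uniform_quantizer(list(unigram_bow.values()), 16)

    def backed_off_logp(w1, w2):
        """Back-off estimate of log P(w2 | w1) from quantized unigrams."""
        return dec_b(enc_b(unigram_bow[w1])) + dec_p(enc_p(unigram_logp[w2]))

    # Prune bigrams whose quantized probability coincides with the
    # quantized back-off estimate: after quantization such entries
    # carry no information, so dropping them is lossless.
    kept = {ng: enc_p(lp) for ng, lp in bigram_logp.items()
            if enc_p(lp) != enc_p(backed_off_logp(*ng))}

    print("kept %d of %d bigrams after quantization"
          % (len(kept), len(bigram_logp)))

With the toy numbers above, the bigram ("the", "sat") quantizes to the same codeword as its back-off estimate and is therefore pruned, while the other two bigrams survive.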


Bibliographic reference: Whittaker, E. W. D. / Raj, Bhiksha (2001): "Quantization-based language model compression", In EUROSPEECH-2001, 33-36.