Despite the availability of better-performing techniques, most language models are trained with popular toolkits that do not support perplexity optimization. In this work, we present an efficient data structure and optimized algorithms specifically designed for iterative parameter tuning. With the resulting implementation, we demonstrate the feasibility and effectiveness of such iterative techniques for language model estimation.
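The abstract does not describe the paper's data structure or algorithms, but the general idea of iterative perplexity-based parameter tuning can be sketched generically. The toy corpus, the linear-interpolation model, and the grid search below are all illustrative assumptions, not the authors' method:

```python
import math
from collections import Counter

# Hypothetical toy data; a real setup would use separate training and
# held-out corpora of realistic size.
train = "the cat sat on the mat the cat ate".split()
dev = "the cat sat on the mat".split()

unigrams = Counter(train)
bigrams = Counter(zip(train, train[1:]))
total = sum(unigrams.values())

def perplexity(lam):
    """Held-out perplexity of a linearly interpolated bigram model:
    P(w | h) = lam * P_bigram(w | h) + (1 - lam) * P_unigram(w)."""
    log_prob = 0.0
    for h, w in zip(dev, dev[1:]):
        p_uni = unigrams[w] / total
        p_bi = bigrams[(h, w)] / unigrams[h] if unigrams[h] else 0.0
        log_prob += math.log(lam * p_bi + (1 - lam) * p_uni)
    return math.exp(-log_prob / (len(dev) - 1))

# Iterative tuning: evaluate many candidate interpolation weights and
# keep the one that minimizes held-out perplexity.  Each evaluation
# re-traverses the n-gram statistics, which is why an efficient data
# structure matters when this loop runs many times.
best_lam = min((l / 100 for l in range(1, 100)), key=perplexity)
```

A production implementation would replace the grid search with a smarter optimizer and cache the n-gram statistics so that each parameter evaluation is cheap; making such repeated evaluations efficient is the concern the abstract points to.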
Bibliographic reference: Hsu, Bo-June and Glass, James (2008), "Iterative language model estimation: efficient data structure & algorithms," in Proc. INTERSPEECH 2008, pp. 841-844.