EUROSPEECH 2001 Scandinavia
In this paper, a new language model, the Multi-Class Composite N-gram, is proposed to avoid the data-sparseness problem caused by a small amount of training data. The Multi-Class Composite N-gram maintains accurate word prediction and reliability for sparse data with a compact model size, based on multiple word clusters called Multi-Classes. In a Multi-Class, the statistical connectivity at each position of the N-gram is regarded as a word attribute, and a separate word cluster is created to represent each positional attribute. Furthermore, by introducing higher-order word N-grams through the grouping of frequent word successions, Multi-Class N-grams are extended to Multi-Class Composite N-grams. In experiments, the Multi-Class Composite N-grams achieve 9.5% lower perplexity and a 16% lower word error rate in speech recognition, with a 40% smaller parameter size than conventional word 3-grams.
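The per-position clustering the abstract describes can be illustrated with a minimal sketch. In the bigram case, each word carries two cluster labels: one for its behavior as the predicted word (its "to" class) and one for its behavior as the conditioning word (its "from" class), and the word bigram probability factors through these classes. All class names and probability values below are invented toy figures, not data from the paper:

```python
# Toy sketch of a two-cluster (Multi-Class) bigram decomposition:
#   P(w_t | w_{t-1}) ~= P(w_t | to(w_t)) * P(to(w_t) | from(w_{t-1}))
# Each word has a "to" class (as predicted word) and a "from" class
# (as conditioning word); the assignments need not coincide.

to_class   = {"the": "DET_to", "cat": "NOUN_to", "sat": "VERB_to"}
from_class = {"the": "DET_from", "cat": "NOUN_from", "sat": "VERB_from"}

# P(word | its to-class): membership probability within the cluster
p_word_given_to = {("the", "DET_to"): 1.0,
                   ("cat", "NOUN_to"): 0.5,
                   ("sat", "VERB_to"): 0.4}

# P(to-class | preceding word's from-class): class-level bigram
p_to_given_from = {("NOUN_to", "DET_from"): 0.7,
                   ("VERB_to", "NOUN_from"): 0.6,
                   ("DET_to", "VERB_from"): 0.5}

def multiclass_bigram(prev: str, word: str) -> float:
    """P(word | prev) under the two-cluster decomposition."""
    tc, fc = to_class[word], from_class[prev]
    return p_word_given_to[(word, tc)] * p_to_given_from[(tc, fc)]

# Probability of "the cat sat" (product of the two bigram factors)
sentence = ["the", "cat", "sat"]
prob = 1.0
for prev, word in zip(sentence, sentence[1:]):
    prob *= multiclass_bigram(prev, word)
print(round(prob, 4))  # (0.5 * 0.7) * (0.4 * 0.6) = 0.084
```

Because class-to-class transitions are shared across all member words, far fewer parameters must be estimated than for a full word bigram table, which is what gives the model its robustness on sparse data; the composite extension then merges frequent word successions (e.g. "of the") into single units to recover higher-order word context where counts permit.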
Bibliographic reference. Isogai, Shuntaro / Shirai, Katsuhiko / Yamamoto, Hirofumi / Sagisaka, Yoshinori (2001): "Multi-class composite n-gram language model using multiple word clusters and word successions", In EUROSPEECH-2001, 25-28.