Sixth European Conference on Speech Communication and Technology

Budapest, Hungary
September 5-9, 1999

A Missing-word Test Comparison of Human and Statistical Language Model Performance

Marie Owens, Anja Krüger, Paul Donnelly, F J Smith, Ji Ming

School of Computer Science, The Queen’s University of Belfast, Belfast, Northern Ireland, UK

A suite of missing-word tests, based on text extracts selected at random from two different text corpora, provided a metric used to evaluate human performance, to evaluate language model performance, and to cross-compare the two. The effect of providing different sizes of context for the missing word (ranging from two words to three sentences) was examined, and two main patterns emerged from the results:
- surprisingly, for tests in which the language model could take advantage of all the context information provided (i.e. where the context consisted of just a few words), it outperformed humans;
- conversely, humans outperformed the language model when the context given for the missing word exceeded the size that the model could usefully employ in its probability calculations (typically more than six words).
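The test procedure the abstract describes can be illustrated with a minimal sketch (not the authors' code): a bigram model trained on a toy corpus scores candidate words for a blank given its immediate left and right context, and the top-scoring candidate counts as the model's guess. The corpus, candidate list, and additive smoothing constant are all illustrative assumptions.

```python
# Minimal missing-word test sketch: a bigram model guesses the word
# hidden in a frame "left ___ right". All data here is a toy example.
from collections import Counter

def train_bigrams(corpus):
    """Count unigrams and bigrams over a list of tokenised sentences."""
    uni, bi = Counter(), Counter()
    for sent in corpus:
        tokens = ["<s>"] + sent + ["</s>"]
        uni.update(tokens)
        bi.update(zip(tokens, tokens[1:]))
    return uni, bi

def score(candidate, left, right, uni, bi, alpha=0.1):
    """Additively smoothed P(candidate | left) * P(right | candidate)."""
    v = len(uni)  # vocabulary size for smoothing
    p_in = (bi[(left, candidate)] + alpha) / (uni[left] + alpha * v)
    p_out = (bi[(candidate, right)] + alpha) / (uni[candidate] + alpha * v)
    return p_in * p_out

def guess_missing(left, right, candidates, uni, bi):
    """Return the candidate that best fits between left and right."""
    return max(candidates, key=lambda w: score(w, left, right, uni, bi))

corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "a cat ran on the mat".split(),
]
uni, bi = train_bigrams(corpus)
# Test frame "the ___ sat" with three candidate fillers.
print(guess_missing("the", "sat", ["cat", "mat", "on"], uni, bi))  # -> cat
```

A bigram model like this uses only one word of context on each side, which mirrors the abstract's finding: extra context beyond the model's n-gram span (here, anything past the adjacent words) cannot improve its guess, whereas human readers can exploit it.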


Bibliographic reference.  Owens, Marie / Krüger, Anja / Donnelly, Paul / Smith, F J / Ming, Ji (1999): "A missing-word test comparison of human and statistical language model performance", In EUROSPEECH'99, 145-148.