Sixth International Conference on Spoken Language Processing (ICSLP 2000)

Beijing, China
October 16-20, 2000

An Embedded Knowledge Integration for Hybrid Language Modelling

Shuwu Zhang, Hirofumi Yamamoto, Yoshinori Sagisaka

ATR Spoken Language Translation Labs, Kyoto, Japan

This paper describes an embedded architecture that couples utilizable language knowledge with innovative language models and modeling approaches for intensive language modeling in speech recognition. In this embedded mechanism, three language modeling approaches operate at different levels, i.e., the composite N-gram, the distance-related unit association maximum entropy (DU-AME) model, and the linkgram. They serve different functions: extending the definitions of basic language units, improving the underlying model over conventional N-grams, and providing effective combination with longer-history syntactic link dependency knowledge, respectively.

In this three-level hybrid language modeling, each lower-level model serves the higher-level model(s): the results at each level are utilized, or embedded, in the level(s) above it. The models can be trained level by level, so that prospective language constraints are finally embedded in a well-organized hybrid model.
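The level-by-level embedding idea can be illustrated with a minimal sketch. The class and corpus below are hypothetical and greatly simplified: each level here is just a bigram count model, and "embedding" a lower level is approximated by linear interpolation with an already-trained lower model, which stands in for the far richer composite N-gram, DU-AME, and linkgram machinery of the paper.

```python
from collections import defaultdict

class NGramLevel:
    """One level of a hypothetical hybrid model: estimates P(w | h) from
    bigram counts and, if a lower level is embedded, interpolates with it."""
    def __init__(self, lower=None, weight=0.7):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.lower = lower      # embedded lower-level model, trained first
        self.weight = weight    # interpolation weight for this level

    def train(self, sentences):
        for sent in sentences:
            for h, w in zip(sent, sent[1:]):
                self.counts[h][w] += 1

    def prob(self, h, w):
        total = sum(self.counts[h].values())
        own = self.counts[h][w] / total if total else 0.0
        if self.lower is None:
            return own
        # embed the lower level's estimate via linear interpolation
        return self.weight * own + (1 - self.weight) * self.lower.prob(h, w)

# train level by level: the lower model first, then embed it in the higher one
corpus = [["<s>", "a", "b", "</s>"], ["<s>", "a", "c", "</s>"]]
base = NGramLevel()
base.train(corpus)
hybrid = NGramLevel(lower=base, weight=0.7)
hybrid.train(corpus)
print(hybrid.prob("a", "b"))  # interpolated estimate of P(b | a)
```

In the paper's actual scheme each level contributes different knowledge rather than the same counts, but the training order is the same: lower levels are fixed before the higher level that embeds them is estimated.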

Experimental results based on the embedded modeling show that the hybrid model reduces the word error rate (WER) by 14.5% compared with a conventional word-based bigram model. The hybrid model can therefore be expected to improve on conventional statistical language modeling.


Full Paper

Bibliographic reference.  Zhang, Shuwu / Yamamoto, Hirofumi / Sagisaka, Yoshinori (2000): "An embedded knowledge integration for hybrid language modelling", In ICSLP-2000, vol.1, 182-195.