5th International Conference on Spoken Language Processing
We present two tools that allow efficient training of arbitrary-order hidden Markov models, including mixed- and infinite-order models. The method rests on two components: an ORder rEDucing (ORED) algorithm, which converts a high-order model into an equivalent first-order representation, and a Fast (order) Incremental Training (FIT) algorithm. We demonstrate that this approach is more flexible, trains significantly faster, and generalises better than prior work. Order reducing is also shown to give insight into the language-modelling capabilities of certain high-order HMM topologies.
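The core idea behind order reduction can be illustrated with a minimal sketch. This is not the authors' ORED algorithm itself, but the standard state-space construction it builds on: a second-order HMM over N states, whose dependence on the two previous states is captured by a transition tensor, is rewritten as a first-order HMM over the N*N ordered pairs of consecutive original states. All names below (`reduce_second_order`, `A2`, `A1`) are illustrative.

```python
import numpy as np

def reduce_second_order(A2):
    """Map a second-order transition tensor to a first-order matrix.

    A2[i, j, k] = P(s_t = k | s_{t-1} = j, s_{t-2} = i).
    The composite state (i, j) gets index i * N + j; emitting original
    state k moves (i, j) -> (j, k) with probability A2[i, j, k], and all
    other composite transitions have probability zero.
    """
    N = A2.shape[0]
    A1 = np.zeros((N * N, N * N))
    for i in range(N):
        for j in range(N):
            for k in range(N):
                A1[i * N + j, j * N + k] = A2[i, j, k]
    return A1

# Toy example: 2 original states, random row-stochastic A2.
rng = np.random.default_rng(0)
A2 = rng.random((2, 2, 2))
A2 /= A2.sum(axis=2, keepdims=True)
A1 = reduce_second_order(A2)

# The reduced model is a valid first-order HMM: each composite state's
# outgoing probabilities sum to 1, and only pairs sharing the middle
# state are connected, e.g. (0,0) cannot jump directly to (1,0).
assert np.allclose(A1.sum(axis=1), 1.0)
assert A1[0, 2] == 0.0
```

The same construction extends to order R by taking R-tuples of states, which is why standard first-order training and decoding machinery can then be reused unchanged on the reduced model.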
Bibliographic reference. du Preez, J. A. / Weber, D. M. (1998): "Efficient high-order hidden Markov modelling", in Proc. ICSLP-1998, paper 1073.