INTERSPEECH 2015
16th Annual Conference of the International Speech Communication Association

Dresden, Germany
September 6-10, 2015

Efficient Machine Translation Decoding with Slow Language Models

Ahmad Emami

IBM T.J. Watson Research Center, USA

Efficient decoding has been a fundamental problem in machine translation research. Usually a significant part of the computational complexity lies in the language model cost computations. If slow language models, such as neural network or maximum-entropy models, are used, the computational complexity can be so high as to render decoding impractical. In this paper we propose a method for efficiently integrating slow language models into machine translation decoding. Specifically, we employ neural network language models in a hierarchical phrase-based translation decoder and achieve a more than 15-fold speed-up over integrating the neural network models directly. The speed-up comes without any noticeable drop in machine translation output quality, as measured by automatic evaluation metrics. Our proposed method is general enough to be applied to a wide variety of models and decoders.
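The full paper spells out the actual integration technique; as a purely illustrative sketch of the general idea of decoupling search from an expensive language model, the Python fragment below runs the search with a cheap proxy LM and applies a memoized slow LM only when rescoring the surviving k-best hypotheses. All names here (fast_lm_score, slow_lm_score, rescore_kbest) are hypothetical placeholders, not functions from the paper.

    from functools import lru_cache

    # Illustrative sketch only: decode with a cheap proxy LM, then rescore
    # the surviving hypotheses with the expensive ("slow") LM.  This is a
    # generic two-pass scheme, not necessarily the method of the paper.

    def fast_lm_score(context, word):
        # Stand-in for a cheap n-gram LM lookup used during search.
        return -1.0  # placeholder log-probability

    def slow_lm_score(context, word):
        # Stand-in for an expensive neural-network LM forward pass.
        return -2.0  # placeholder log-probability

    @lru_cache(maxsize=None)
    def cached_slow_score(context, word):
        # Memoize slow-LM calls: in beam/hypergraph decoding many partial
        # hypotheses share n-gram contexts, so each (context, word) pair
        # needs only one expensive evaluation.
        return slow_lm_score(context, word)

    def rescore_kbest(kbest, order=5):
        # Re-rank a k-best list (each entry a list of tokens) with the
        # cached slow LM; the fast LM has already pruned the search space.
        def sentence_logprob(tokens):
            total = 0.0
            for i, word in enumerate(tokens):
                context = tuple(tokens[max(0, i - order + 1):i])
                total += cached_slow_score(context, word)
            return total
        return sorted(kbest, key=sentence_logprob, reverse=True)

Under such a scheme the slow model is consulted only on the order of k times the sentence length per input, rather than once per edge expanded during search, which is the kind of saving that motivates deferring or caching the slow model in the first place.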


Bibliographic reference. Emami, Ahmad (2015): "Efficient machine translation decoding with slow language models", in INTERSPEECH-2015, 2376-2379.