We introduce a new framework employing statistical language models (SLMs) for spoken dialog systems that facilitates the dynamic update of word probabilities based on dialog history. In combination with traditional state-dependent SLMs, we use a Bayesian network to capture dependencies between user goal concepts and compute accurate distributions over the words that express these concepts. This allows the framework to exploit information provided by the user in previous turns to predict the values of unobserved concepts. We evaluate this approach on a large corpus of publicly available dialogs from the CMU Let's Go bus information system, and show that our approach significantly improves concept understanding precision over purely state-dependent SLMs.
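The core idea can be illustrated with a small sketch (not the authors' implementation; the network structure, concept names, and interpolation weight below are hypothetical): a two-node Bayesian network links an observed concept from an earlier turn (here, a neighborhood) to an unobserved one (a bus route), and the resulting posterior over route words is interpolated with a state-dependent SLM distribution.

```python
# Illustrative sketch of dynamic language-model reweighting via a
# Bayesian network. All tables and concept names are hypothetical.

# P(route | neighborhood): conditional probability table of the network.
P_ROUTE_GIVEN_NBHD = {
    "oakland": {"61c": 0.7, "54c": 0.2, "28x": 0.1},
    "airport": {"61c": 0.1, "54c": 0.1, "28x": 0.8},
}
P_NBHD = {"oakland": 0.6, "airport": 0.4}  # prior over neighborhoods

# State-dependent unigram over route words (stand-in for the SLM).
P_SLM = {"61c": 1 / 3, "54c": 1 / 3, "28x": 1 / 3}

def route_distribution(observed_nbhd=None):
    """Posterior over route words given the dialog history so far."""
    if observed_nbhd is not None:
        # Neighborhood was stated in a previous turn: condition on it.
        return dict(P_ROUTE_GIVEN_NBHD[observed_nbhd])
    # Otherwise marginalise over the unobserved neighborhood concept.
    routes = {}
    for nbhd, p_n in P_NBHD.items():
        for route, p_r in P_ROUTE_GIVEN_NBHD[nbhd].items():
            routes[route] = routes.get(route, 0.0) + p_n * p_r
    return routes

def dynamic_lm(observed_nbhd=None, lam=0.5):
    """Interpolate the concept posterior with the state-dependent SLM."""
    post = route_distribution(observed_nbhd)
    return {w: lam * post.get(w, 0.0) + (1 - lam) * P_SLM[w] for w in P_SLM}
```

After the user mentions the airport, `dynamic_lm("airport")` shifts probability mass toward "28x", mimicking how dialog history can sharpen the recognizer's language model for the next turn.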
Bibliographic reference. Raux, Antoine / Mehta, Neville / Ramachandran, Deepak / Gupta, Rakesh (2010): "Dynamic language modeling using Bayesian networks for spoken dialog systems", in INTERSPEECH-2010, pp. 3030–3033.