We describe a procedure for contextual interpretation of spoken sentences within dialogs. Task structure is represented in a graphical form, enabling the interpreter algorithm to be efficient and task-independent. Recognized spoken input may consist either of a single sentence with utterance-verification scores, or of a word lattice with arc weights. A confidence model is used throughout and all inferences are probability-weighted. The interpretation consists of a probability for each class and for each auxiliary information label needed for task completion. Anaphoric references are permitted.
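The abstract describes combining recognition confidence scores with probability-weighted inference to score task classes. As a minimal illustrative sketch only (the function names, the phrase-to-class mapping, and the noisy-OR combination rule are assumptions for illustration, not the paper's actual algorithm), confidence-weighted class scoring might look like:

```python
def interpret(observations, phrase_to_class):
    """Score task classes from recognized phrases.

    observations: list of (phrase, confidence) pairs, where confidence is a
        hypothetical utterance-verification score in [0, 1].
    phrase_to_class: dict mapping a phrase to {class: P(class | phrase)}.

    Returns a dict mapping each class to a probability-weighted score.
    """
    scores = {}
    for phrase, conf in observations:
        for cls, p in phrase_to_class.get(phrase, {}).items():
            # Accumulate evidence with a noisy-OR rule, weighting each
            # phrase's contribution by its recognition confidence.
            prior = scores.get(cls, 0.0)
            scores[cls] = 1.0 - (1.0 - prior) * (1.0 - conf * p)
    return scores


if __name__ == "__main__":
    # Hypothetical example: two recognized phrases with confidences.
    obs = [("collect call", 0.9), ("my home phone", 0.6)]
    mapping = {
        "collect call": {"COLLECT": 0.95},
        "my home phone": {"BILLING": 0.8},
    }
    print(interpret(obs, mapping))  # e.g. {'COLLECT': 0.855, 'BILLING': 0.48}
```

A word-lattice input would contribute one weighted observation per arc rather than one per recognized phrase; the combination rule stays the same.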
Cite as: Wright, J.H., Gorin, A.L., Abella, A. (1998) Spoken language understanding within dialogs using a graphical model of task structure. Proc. 5th International Conference on Spoken Language Processing (ICSLP 1998), paper 0385, doi: 10.21437/ICSLP.1998-507
@inproceedings{wright98b_icslp,
  author={Jeremy H. Wright and Allen L. Gorin and Alicia Abella},
  title={{Spoken language understanding within dialogs using a graphical model of task structure}},
  year=1998,
  booktitle={Proc. 5th International Conference on Spoken Language Processing (ICSLP 1998)},
  pages={paper 0385},
  doi={10.21437/ICSLP.1998-507}
}