Natural language generation for task-oriented dialogue systems aims to effectively realize system dialogue actions. All natural language generators (NLGs) must produce grammatical, natural and appropriate output, but in addition, generators for task-oriented dialogue must faithfully perform a specific dialogue act that conveys specific semantic information, as dictated by the dialogue policy of the system dialogue manager. Most previous work on deep learning methods for task-oriented NLG assumes that the generation output can be an utterance skeleton: utterances are delexicalized, with variable names standing in for slots, and these variables are then replaced with actual values in post-processing. However, slot values do, in fact, influence lexical selection in the surrounding context as well as the overall sentence plan. To model this effect, we investigate sequence-to-sequence (seq2seq) models in which slot values are included as part of the input sequence and the output surface form. Furthermore, we study whether a separate sentence planning module that decides on the grouping of slot-value mentions as input to the seq2seq model results in more natural sentences than a seq2seq model that aims to jointly learn the plan and the surface realization.
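To make the contrast drawn in the abstract concrete, the minimal sketch below shows (a) the classic delexicalized skeleton with post-processing slot substitution, (b) a slot-value-informed seq2seq input sequence in which actual values appear inline, and (c) a separate sentence-planning step that groups slot-value mentions before realization. This is not code from the paper; all function names, tags, and the example template are hypothetical, and real systems would feed the token sequences to a trained encoder-decoder rather than return them directly.

```python
# Hypothetical illustration of the input representations contrasted in the
# abstract; the function names and slot-tag conventions are assumptions.

from typing import Dict, List


def delexicalized_skeleton(slots: Dict[str, str]) -> str:
    """Classic approach: generate with slot variables, then substitute the
    actual values in post-processing. The surrounding words cannot adapt to
    the values (e.g., "a" vs. "an" before the food type)."""
    skeleton = "there is a <name> restaurant serving <food> food in the <area>"
    for slot, value in slots.items():
        skeleton = skeleton.replace(f"<{slot}>", value)
    return skeleton


def slot_value_informed_input(act: str, slots: Dict[str, str]) -> List[str]:
    """Slot values are included directly in the seq2seq input sequence, so a
    decoder can condition lexical choices on the actual values."""
    tokens = [act]
    for slot, value in slots.items():
        tokens += [f"<{slot}>"] + value.split() + [f"</{slot}>"]
    return tokens


def planned_inputs(act: str, groups: List[Dict[str, str]]) -> List[List[str]]:
    """A separate sentence-planning step first groups slot-value mentions;
    each group then becomes one seq2seq input, i.e., one output sentence,
    instead of the model jointly learning plan and realization."""
    return [slot_value_informed_input(act, group) for group in groups]


if __name__ == "__main__":
    slots = {"name": "Sakura", "food": "japanese", "area": "centre"}
    print(delexicalized_skeleton(slots))
    print(slot_value_informed_input("inform", slots))
    # Sentence plan: mention name and food together, then area separately.
    print(planned_inputs("inform", [{"name": "Sakura", "food": "japanese"},
                                    {"area": "centre"}]))
```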
Cite as: Nayak, N., Hakkani-Tür, D., Walker, M., Heck, L. (2017) To Plan or not to Plan? Discourse Planning in Slot-Value Informed Sequence to Sequence Models for Language Generation. Proc. Interspeech 2017, 3339-3343, doi: 10.21437/Interspeech.2017-1525
@inproceedings{nayak17_interspeech,
  author={Neha Nayak and Dilek Hakkani-Tür and Marilyn Walker and Larry Heck},
  title={{To Plan or not to Plan? Discourse Planning in Slot-Value Informed Sequence to Sequence Models for Language Generation}},
  year=2017,
  booktitle={Proc. Interspeech 2017},
  pages={3339--3343},
  doi={10.21437/Interspeech.2017-1525}
}