Towards Zero-Shot Frame Semantic Parsing for Domain Scaling

Ankur Bapna, Gokhan Tür, Dilek Hakkani-Tür, Larry Heck


State-of-the-art slot filling models for goal-oriented human/machine conversational language understanding systems rely on deep learning methods. While multi-task training of such models alleviates the need for large in-domain annotated datasets, bootstrapping a semantic parsing model for a new domain using only the semantic frame, such as the back-end API or knowledge graph schema, remains one of the holy grail tasks of language understanding for dialogue systems. This paper proposes a deep learning based approach that uses only the slot description in context, without any labeled or unlabeled in-domain examples, to quickly bootstrap a new domain. The main idea is to leverage the encoding of the slot names and descriptions within a multi-task deep learned slot filling model to implicitly align slots across domains. The proposed approach is promising for solving the domain scaling problem and eliminating the need for manually annotated data or explicit schema alignment. Furthermore, our experiments on multiple domains show that this approach yields significantly better slot-filling performance than using only in-domain data, especially in the low-data regime.
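The core idea of the abstract can be illustrated with a toy sketch: represent each slot by encoding its natural-language description, then tag utterance tokens by comparing their representations to that encoding, so an unseen slot can be handled zero-shot from its description alone. This is a deliberate simplification, not the authors' architecture: the hand-built word vectors, the mean-pooling encoder, and the cosine-similarity threshold below are all illustrative stand-ins for the paper's learned, multi-task deep model.

```python
# Toy sketch of description-conditioned (zero-shot) slot tagging.
# HYPOTHETICAL simplification: hand-built embeddings and mean-pooling
# stand in for the paper's learned encoders.
import math

# Tiny hand-built word vectors standing in for learned embeddings.
EMB = {
    "book":   [0.9, 0.1, 0.0],
    "a":      [0.1, 0.1, 0.0],
    "table":  [0.8, 0.2, 0.1],
    "in":     [0.1, 0.2, 0.1],
    "london": [0.0, 0.1, 0.9],
    "city":   [0.1, 0.0, 0.9],
    "name":   [0.2, 0.1, 0.3],
    "of":     [0.1, 0.1, 0.1],
}

def encode(words):
    """Mean-pool word vectors (stand-in for a recurrent slot-description encoder)."""
    vecs = [EMB[w] for w in words]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def tag(utterance, slot_description, threshold=0.8):
    """IOB-tag tokens whose embedding aligns with the encoded slot description."""
    slot_vec = encode(slot_description.split())
    tags, inside = [], False
    for tok in utterance.split():
        if cosine(EMB[tok], slot_vec) > threshold:
            tags.append("I" if inside else "B")
            inside = True
        else:
            tags.append("O")
            inside = False
    return tags

# The slot "name of city" was never seen at training time in this toy;
# it is recognized purely through its description encoding.
print(tag("book a table in london", "name of city"))  # → ['O', 'O', 'O', 'O', 'B']
```

Because the tagger is conditioned on a description vector rather than a fixed slot inventory, adding a new domain only requires writing descriptions for its slots, which is the domain-scaling property the paper targets.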


DOI: 10.21437/Interspeech.2017-518

Cite as: Bapna, A., Tür, G., Hakkani-Tür, D., Heck, L. (2017) Towards Zero-Shot Frame Semantic Parsing for Domain Scaling. Proc. Interspeech 2017, 2476-2480, DOI: 10.21437/Interspeech.2017-518.


@inproceedings{Bapna2017,
  author={Ankur Bapna and Gokhan Tür and Dilek Hakkani-Tür and Larry Heck},
  title={Towards Zero-Shot Frame Semantic Parsing for Domain Scaling},
  year={2017},
  booktitle={Proc. Interspeech 2017},
  pages={2476--2480},
  doi={10.21437/Interspeech.2017-518},
  url={http://dx.doi.org/10.21437/Interspeech.2017-518}
}