Multi-Domain Joint Semantic Frame Parsing Using Bi-Directional RNN-LSTM

Dilek Hakkani-Tür, Gokhan Tur, Asli Celikyilmaz, Yun-Nung Chen, Jianfeng Gao, Li Deng, Ye-Yi Wang

Sequence-to-sequence deep learning has recently emerged as a new paradigm in supervised learning for spoken language understanding. However, most previous studies explored this framework to build single-domain models for each task, such as slot filling or domain classification, comparing deep-learning-based approaches with conventional ones such as conditional random fields. This paper proposes a holistic multi-domain, multi-task (i.e., slot filling, domain and intent detection) modeling approach that estimates complete semantic frames for all user utterances addressed to a conversational system, demonstrating the distinctive power of deep learning methods, namely a bi-directional recurrent neural network (RNN) with long short-term memory (LSTM) cells (RNN-LSTM), to handle such complexity. The contributions of the presented work are three-fold: (i) we propose an RNN-LSTM architecture for joint modeling of slot filling, intent determination, and domain classification; (ii) we build a joint multi-domain model enabling multi-task deep learning, where the data from each domain reinforce each other; (iii) we investigate alternative architectures for modeling lexical context in spoken language understanding. In addition to the simplicity of the single-model framework, experimental results on Microsoft Cortana real user data show the power of such an approach over alternative methods based on single-domain/task deep learning.
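The joint architecture described in the abstract can be sketched roughly as follows: a forward and a backward LSTM read the utterance, their per-token hidden states are concatenated for slot tagging, and the final forward and backward states feed a single frame-level classifier for the combined domain/intent decision (in the paper this frame label is emitted as the tag of an appended end-of-sentence token). This is a minimal NumPy sketch under those assumptions; all names (`joint_parse`, `init_lstm`, etc.) are illustrative, not from the paper, and training, word embeddings, and decoding are omitted.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def lstm_step(x, h, c, W, U, b):
    # One LSTM cell update; the four gates (input, forget, output,
    # candidate) are stacked row-wise in W, U, and b.
    z = W @ x + U @ h + b
    H = h.size
    i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
    g = np.tanh(z[3*H:])
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

def run_lstm(xs, params, H):
    # Run one direction over the sequence, returning all hidden states.
    W, U, b = params
    h, c = np.zeros(H), np.zeros(H)
    hs = []
    for x in xs:
        h, c = lstm_step(x, h, c, W, U, b)
        hs.append(h)
    return hs

def init_lstm(D, H, rng):
    # Small random weights for one LSTM direction (illustrative init).
    return (rng.standard_normal((4*H, D)) * 0.1,
            rng.standard_normal((4*H, H)) * 0.1,
            np.zeros(4*H))

def joint_parse(xs, fwd, bwd, W_slot, W_frame, H):
    # Bi-directional pass: backward LSTM consumes the reversed input,
    # and its outputs are re-reversed into forward time order.
    hf = run_lstm(xs, fwd, H)
    hb = run_lstm(xs[::-1], bwd, H)[::-1]
    # Per-token slot distributions from concatenated states.
    slots = [softmax(W_slot @ np.concatenate([f, b]))
             for f, b in zip(hf, hb)]
    # One frame-level (domain+intent) distribution from the final
    # forward state and the backward state at position 0 (assumption).
    frame = softmax(W_frame @ np.concatenate([hf[-1], hb[0]]))
    return slots, frame
```

In the joint multi-task setup, the slot losses and the single frame-level loss would be summed during training, which is how data from each domain and task can reinforce the shared representation.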

DOI: 10.21437/Interspeech.2016-402

Cite as

Hakkani-Tür, D., Tur, G., Celikyilmaz, A., Chen, Y., Gao, J., Deng, L., Wang, Y. (2016) Multi-Domain Joint Semantic Frame Parsing Using Bi-Directional RNN-LSTM. Proc. Interspeech 2016, 715-719.

@inproceedings{hakkanitur16_interspeech,
  author={Dilek Hakkani-Tür and Gokhan Tur and Asli Celikyilmaz and Yun-Nung Chen and Jianfeng Gao and Li Deng and Ye-Yi Wang},
  title={Multi-Domain Joint Semantic Frame Parsing Using Bi-Directional RNN-LSTM},
  booktitle={Interspeech 2016},
  year={2016},
  pages={715--719},
  doi={10.21437/Interspeech.2016-402}
}