Zero-Shot Learning Across Heterogeneous Overlapping Domains

Anjishnu Kumar, Pavankumar Reddy Muddireddy, Markus Dreyer, Björn Hoffmeister


We present a zero-shot learning approach for text classification, predicting which natural language understanding domain can handle a given utterance. Our approach can predict domains at runtime that did not exist at training time. We achieve this extensibility by learning to project utterances and domains into the same embedding space while generating each domain-specific embedding from a set of attributes that characterize the domain. Our model is a neural network trained via ranking loss. We evaluate the performance of this zero-shot approach on a subset of a virtual assistant’s third-party domains and show the effectiveness of the technique on new domains not observed during training. We compare to generative baselines and show that our approach requires less storage and performs better on new domains.
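The core idea — embed the utterance and each domain into a shared space, build the domain embedding from its attributes so new domains can be scored at runtime, and train with a ranking loss — can be sketched as follows. This is a minimal illustration with made-up vocabularies, mean-pooled embeddings, and cosine scoring; the paper's actual encoders, attributes, and loss details are not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16

# Hypothetical word and attribute vocabularies (illustrative only).
word_vecs = {w: rng.normal(size=DIM) for w in
             ["play", "some", "music", "order", "a", "pizza"]}
attr_vecs = {a: rng.normal(size=DIM) for a in
             ["audio", "streaming", "food", "delivery", "ride", "hailing"]}

def encode(tokens, table):
    """Mean of token embeddings -- a stand-in for the learned encoders."""
    return np.mean([table[t] for t in tokens], axis=0)

def score(u, d):
    """Cosine similarity between utterance and domain embeddings."""
    return float(u @ d / (np.linalg.norm(u) * np.linalg.norm(d)))

def ranking_loss(u, pos, negs, margin=0.5):
    """Hinge ranking loss: push the correct domain above each negative."""
    s_pos = score(u, pos)
    return sum(max(0.0, margin - s_pos + score(u, n)) for n in negs)

utterance = encode(["play", "some", "music"], word_vecs)
music_dom = encode(["audio", "streaming"], attr_vecs)   # positive domain
food_dom = encode(["food", "delivery"], attr_vecs)      # negative domain

loss = ranking_loss(utterance, music_dom, [food_dom])

# Zero-shot extensibility: a domain that did not exist at training time
# still gets an embedding at runtime from its attribute set alone.
ride_dom = encode(["ride", "hailing"], attr_vecs)
ride_score = score(utterance, ride_dom)
```

Because a domain's embedding is a function of its attributes rather than a learned per-domain parameter, adding a new domain requires no retraining and no per-domain storage beyond its attribute list — which is the source of the storage advantage the abstract claims over generative baselines.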


DOI: 10.21437/Interspeech.2017-516

Cite as: Kumar, A., Muddireddy, P.R., Dreyer, M., Hoffmeister, B. (2017) Zero-Shot Learning Across Heterogeneous Overlapping Domains. Proc. Interspeech 2017, 2914-2918, DOI: 10.21437/Interspeech.2017-516.


@inproceedings{Kumar2017,
  author={Anjishnu Kumar and Pavankumar Reddy Muddireddy and Markus Dreyer and Björn Hoffmeister},
  title={Zero-Shot Learning Across Heterogeneous Overlapping Domains},
  year={2017},
  booktitle={Proc. Interspeech 2017},
  pages={2914--2918},
  doi={10.21437/Interspeech.2017-516},
  url={http://dx.doi.org/10.21437/Interspeech.2017-516}
}