ISCA Archive Interspeech 2021

Augmenting Slot Values and Contexts for Spoken Language Understanding with Pretrained Models

Haitao Lin, Lu Xiang, Yu Zhou, Jiajun Zhang, Chengqing Zong

Spoken Language Understanding (SLU) is an essential step in building a dialogue system. Because labeled data are expensive to obtain, SLU suffers from data scarcity. In this paper, we therefore focus on data augmentation for the slot filling task in SLU, aiming to generate more diverse data from existing data. Specifically, we exploit the latent language knowledge in pretrained language models by finetuning them, and propose two finetuning strategies: value-based and context-based augmentation. Experimental results on two public SLU datasets show that, compared with existing data augmentation methods, our method generates more diverse sentences and significantly improves SLU performance.
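To give a concrete sense of value-based augmentation for slot filling, the sketch below swaps each labeled slot-value span for an alternative value while keeping the BIO tags aligned. This is a minimal, hypothetical illustration of the general idea, not the paper's method: the paper generates new values and contexts with a finetuned pretrained language model, whereas this sketch samples from a fixed candidate dictionary (`slot_values`, an assumed input).

```python
import random

def value_based_augment(tokens, labels, slot_values, rng=None):
    """Replace each slot-value span with an alternative value for the same slot.

    tokens: list of words, e.g. ["flight", "to", "new", "york"]
    labels: BIO tags aligned with tokens, e.g. ["O", "O", "B-city", "I-city"]
    slot_values: dict mapping slot name -> list of candidate value strings
    """
    rng = rng or random.Random(0)
    out_tokens, out_labels = [], []
    i = 0
    while i < len(tokens):
        if labels[i].startswith("B-"):
            slot = labels[i][2:]
            # Find the end of this slot-value span (contiguous I- tags).
            j = i + 1
            while j < len(tokens) and labels[j] == f"I-{slot}":
                j += 1
            # Substitute a candidate value and re-emit consistent BIO tags.
            new_value = rng.choice(slot_values[slot]).split()
            out_tokens.extend(new_value)
            out_labels.extend([f"B-{slot}"] + [f"I-{slot}"] * (len(new_value) - 1))
            i = j
        else:
            out_tokens.append(tokens[i])
            out_labels.append(labels[i])
            i += 1
    return out_tokens, out_labels
```

In the paper's approach, the candidate dictionary would effectively be replaced by a finetuned language model that proposes novel, context-appropriate values, which is what yields more diverse sentences than dictionary substitution.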


doi: 10.21437/Interspeech.2021-55

Cite as: Lin, H., Xiang, L., Zhou, Y., Zhang, J., Zong, C. (2021) Augmenting Slot Values and Contexts for Spoken Language Understanding with Pretrained Models. Proc. Interspeech 2021, 4703-4707, doi: 10.21437/Interspeech.2021-55

@inproceedings{lin21k_interspeech,
  author={Haitao Lin and Lu Xiang and Yu Zhou and Jiajun Zhang and Chengqing Zong},
  title={{Augmenting Slot Values and Contexts for Spoken Language Understanding with Pretrained Models}},
  year=2021,
  booktitle={Proc. Interspeech 2021},
  pages={4703--4707},
  doi={10.21437/Interspeech.2021-55}
}