Labeled Data Generation with Encoder-Decoder LSTM for Semantic Slot Filling

Gakuto Kurata, Bing Xiang, Bowen Zhou


Training a model for semantic slot filling requires manually labeled data in which each word is annotated with a semantic slot label, but preparing such data by hand is costly. Starting from a small amount of manually labeled data, we propose a method to generate additional labeled data using an encoder-decoder LSTM. We first train an encoder-decoder LSTM that accepts and reproduces the same manually labeled data. Then, to generate a wide variety of labeled data, we add perturbations to the vector that encodes the manually labeled data and generate labeled data with the decoder LSTM from the perturbed encoded vector. We also enhance the encoder-decoder LSTM to generate word sequences and their label sequences separately, so as to obtain new pairs of words and their labels. In experiments on the standard ATIS slot filling task, using the generated data improved slot filling accuracy over a strong baseline with an NN-based slot filling model.
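The core data-generation step described above, i.e., perturbing the encoded vector before decoding, can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the encoder and decoder are stand-ins, and the Gaussian noise scale `sigma` is an assumed hyperparameter not specified here.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_encoding(encoded, n_samples, sigma=0.1):
    """Sample perturbed copies of an encoder LSTM's output vector.

    Each perturbed copy would then be fed to the decoder LSTM to
    generate a new word/label sequence pair. Both the additive
    Gaussian form and sigma are assumptions for this sketch.
    """
    encoded = np.asarray(encoded, dtype=float)
    noise = rng.normal(0.0, sigma, size=(n_samples,) + encoded.shape)
    return encoded + noise

# A toy 4-dimensional encoding of one labeled utterance
# (a real encoding would come from the trained encoder LSTM).
z = np.array([0.5, -1.2, 0.3, 0.8])
variants = perturb_encoding(z, n_samples=5, sigma=0.1)
# variants has shape (5, 4): five perturbed encodings, each a
# candidate input to the decoder for generating new labeled data.
```

Decoding each row of `variants` with the trained decoder LSTM would yield labeled sequences that vary around the original utterance, which is how the method expands a small seed corpus.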


DOI: 10.21437/Interspeech.2016-727

Cite as

Kurata, G., Xiang, B., Zhou, B. (2016) Labeled Data Generation with Encoder-Decoder LSTM for Semantic Slot Filling. Proc. Interspeech 2016, 725-729.

BibTeX
@inproceedings{Kurata+2016,
author={Gakuto Kurata and Bing Xiang and Bowen Zhou},
title={Labeled Data Generation with Encoder-Decoder LSTM for Semantic Slot Filling},
year=2016,
booktitle={Interspeech 2016},
doi={10.21437/Interspeech.2016-727},
url={http://dx.doi.org/10.21437/Interspeech.2016-727},
pages={725--729}
}