A prevalent and challenging task in spoken language understanding is slot filling. Currently, the best approaches in this domain are based on recurrent neural networks (RNNs). However, in their simplest form, RNNs cannot learn long-term dependencies in the data. In this paper, we propose the use of ClockWork recurrent neural network (CW-RNN) architectures in the slot-filling domain. CW-RNN is a multi-timescale implementation of the simple RNN architecture, which has proven powerful while maintaining relatively low model complexity. In addition, CW-RNN inherently exhibits a strong ability to model long-term memory. In our experiments on the ATIS benchmark data set, we also evaluate several novel variants of CW-RNN and find that they significantly outperform simple RNNs, achieving results among the state of the art while retaining lower model complexity.
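To make the multi-timescale idea concrete, below is a minimal sketch of one forward step of a ClockWork-RNN in the style of Koutník et al. (2014), on which CW-RNN is based. The hidden state is partitioned into modules with increasing clock periods; at timestep t, a module updates only when t is divisible by its period, and each module receives recurrent input only from modules with equal or slower clocks (a block upper-triangular recurrent matrix). Equal module sizes and the specific function and variable names here are assumptions of this sketch, not the authors' implementation.

```python
import numpy as np

def cwrnn_step(x_t, h, t, W_in, W_h, b, periods):
    """One forward step of a ClockWork-RNN (sketch).

    h is split into len(periods) equal-size modules, with `periods`
    sorted in increasing order. Module g (period periods[g]) updates
    only when t % periods[g] == 0; otherwise it keeps its old state.
    """
    n_mod = len(periods)
    m = h.shape[0] // n_mod          # neurons per module (assumed equal)
    h_new = h.copy()
    for g, T in enumerate(periods):
        if t % T == 0:
            rows = slice(g * m, (g + 1) * m)
            # recurrent input comes only from modules with period >= T,
            # i.e. columns g*m .. end when periods are sorted ascending
            cols = slice(g * m, None)
            h_new[rows] = np.tanh(W_in[rows] @ x_t
                                  + W_h[rows, cols] @ h[cols]
                                  + b[rows])
    return h_new
```

Because slow modules are updated (and back-propagated through) only every T steps, gradients reach far back in time through few transitions, which is the source of the long-term memory and the reduced effective complexity noted above.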
Cite as: Georgiadou, D., Diakoloukas, V., Tsiaras, V., Digalakis, V. (2017) ClockWork-RNN Based Architectures for Slot Filling. Proc. Interspeech 2017, 2481-2485, doi: 10.21437/Interspeech.2017-1075
@inproceedings{georgiadou17_interspeech, author={Despoina Georgiadou and Vassilios Diakoloukas and Vassilios Tsiaras and Vassilios Digalakis}, title={{ClockWork-RNN Based Architectures for Slot Filling}}, year=2017, booktitle={Proc. Interspeech 2017}, pages={2481--2485}, doi={10.21437/Interspeech.2017-1075} }