Whereas conventional spoken language understanding (SLU) systems map speech to text and then text to intent, end-to-end SLU systems map speech directly to intent through a single trainable model. Achieving high accuracy with these end-to-end models without a large amount of training data is difficult. We propose a method to reduce the data requirements of end-to-end SLU in which the model is first pre-trained to predict words and phonemes, thus learning good features for SLU. We introduce a new SLU dataset, Fluent Speech Commands, and show that our method improves performance both when the full dataset is used for training and when only a small subset is used. We also describe preliminary experiments to gauge the model’s ability to generalize to new phrases not heard during training.
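To make the pre-training idea concrete, here is a minimal PyTorch-style sketch of the scheme the abstract describes: an acoustic encoder is first trained with phoneme- and word-prediction heads, and the resulting features are then reused for intent classification. This is an illustration under assumed choices, not the authors' released implementation; all module names, layer sizes, and label counts below are hypothetical.

```python
import torch
import torch.nn as nn

class SpeechEncoder(nn.Module):
    """Maps a sequence of acoustic features to hidden states.
    Architecture and sizes are illustrative assumptions."""
    def __init__(self, input_dim=40, hidden_dim=256):
        super().__init__()
        self.rnn = nn.GRU(input_dim, hidden_dim, num_layers=2, batch_first=True)

    def forward(self, feats):             # feats: (batch, time, input_dim)
        hidden, _ = self.rnn(feats)
        return hidden                      # (batch, time, hidden_dim)

class PretrainHeads(nn.Module):
    """Phoneme and word classifiers used only during pre-training,
    e.g. against frame-aligned labels from a forced aligner."""
    def __init__(self, hidden_dim=256, n_phonemes=42, n_words=10000):
        super().__init__()
        self.phoneme_head = nn.Linear(hidden_dim, n_phonemes)
        self.word_head = nn.Linear(hidden_dim, n_words)

    def forward(self, hidden):
        return self.phoneme_head(hidden), self.word_head(hidden)

class IntentClassifier(nn.Module):
    """Reuses the pre-trained encoder, pools over time, and predicts
    the utterance-level intent (label count is a placeholder)."""
    def __init__(self, encoder, hidden_dim=256, n_intents=31):
        super().__init__()
        self.encoder = encoder             # weights come from pre-training
        self.intent_head = nn.Linear(hidden_dim, n_intents)

    def forward(self, feats):
        hidden = self.encoder(feats)
        pooled = hidden.mean(dim=1)        # simple average pooling over time
        return self.intent_head(pooled)

# Sketch of the two stages:
# 1. Pre-train: cross-entropy on SpeechEncoder + PretrainHeads using
#    phoneme/word targets from a transcribed corpus.
# 2. Fine-tune: wrap the pre-trained encoder in IntentClassifier and
#    train on (speech, intent) pairs, e.g. from Fluent Speech Commands.
```

Whether the pre-trained layers are frozen or further updated during intent fine-tuning is a design choice this sketch leaves open.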
Cite as: Lugosch, L., Ravanelli, M., Ignoto, P., Tomar, V.S., Bengio, Y. (2019) Speech Model Pre-Training for End-to-End Spoken Language Understanding. Proc. Interspeech 2019, 814-818, doi: 10.21437/Interspeech.2019-2396
@inproceedings{lugosch19_interspeech,
  author={Loren Lugosch and Mirco Ravanelli and Patrick Ignoto and Vikrant Singh Tomar and Yoshua Bengio},
  title={{Speech Model Pre-Training for End-to-End Spoken Language Understanding}},
  year={2019},
  booktitle={Proc. Interspeech 2019},
  pages={814--818},
  doi={10.21437/Interspeech.2019-2396}
}