INTERSPEECH 2015
16th Annual Conference of the International Speech Communication Association

Dresden, Germany
September 6-10, 2015

Efficient Learning for Spoken Language Understanding Tasks with Word Embedding Based Pre-Training

Yi Luan (1), Shinji Watanabe (2), Bret Harsham (2)

(1) University of Washington, USA
(2) MERL, USA

Spoken language understanding (SLU) tasks such as goal estimation and intention identification from users' commands are essential components of spoken dialog systems. In recent years, neural network approaches have shown great success on various SLU tasks. However, one major difficulty of SLU is that annotating collected data can be expensive, which often leaves insufficient data for a task. The performance of a neural network trained under low-resource conditions is usually inferior because of over-training. To improve performance, this paper investigates unsupervised training methods on large-scale corpora, based on word embeddings and latent topic models, to pre-train the SLU networks. To capture long-term characteristics over an entire dialog, we propose a novel Recurrent Neural Network (RNN) architecture. The proposed RNN uses two sub-networks to model the different time scales represented by word and turn sequences. The combination of pre-training and the proposed RNN gives an 18% relative error reduction over a baseline system.
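The two-time-scale idea can be sketched in a few lines. This is a minimal, hypothetical illustration (all sizes, weights, and function names are assumptions, not the paper's implementation): a word-level RNN encodes each turn, and a slower turn-level RNN consumes the per-turn encodings; the embedding matrix is where unsupervised pre-training (e.g. word2vec-style vectors) would be plugged in instead of a random initialization.

```python
import numpy as np

rng = np.random.default_rng(0)
V, E, Hw, Ht = 50, 8, 16, 12   # vocab, embedding, word-RNN, turn-RNN sizes (illustrative)

# In the pre-trained setting, `emb` would be loaded from an unsupervised
# word-embedding model trained on a large corpus; random values stand in here.
emb = rng.standard_normal((V, E)) * 0.1
Wxw = rng.standard_normal((Hw, E)) * 0.1   # word-RNN input weights
Whw = rng.standard_normal((Hw, Hw)) * 0.1  # word-RNN recurrent weights
Wxt = rng.standard_normal((Ht, Hw)) * 0.1  # turn-RNN input weights
Wht = rng.standard_normal((Ht, Ht)) * 0.1  # turn-RNN recurrent weights

def word_rnn(turn_word_ids):
    """Encode one turn (a list of word ids) into its final hidden state."""
    h = np.zeros(Hw)
    for w in turn_word_ids:
        h = np.tanh(Wxw @ emb[w] + Whw @ h)
    return h

def turn_rnn(dialog):
    """Run the slower turn-level RNN over the per-turn encodings."""
    s = np.zeros(Ht)
    states = []
    for turn in dialog:
        s = np.tanh(Wxt @ word_rnn(turn) + Wht @ s)
        states.append(s)
    return states  # one state per turn; a classifier layer would sit on top

dialog = [[3, 7, 12], [5, 5, 9, 1], [44]]  # three turns of word ids
states = turn_rnn(dialog)
print(len(states), states[0].shape)  # 3 (12,)
```

The turn-level state carries context across the whole dialog, which is what lets the model capture characteristics longer than a single utterance.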


Bibliographic reference. Luan, Yi / Watanabe, Shinji / Harsham, Bret (2015): "Efficient learning for spoken language understanding tasks with word embedding based pre-training", In INTERSPEECH-2015, 1398-1402.