INTERSPEECH 2015
16th Annual Conference of the International Speech Communication Association

Dresden, Germany
September 6-10, 2015

A Study of the Recurrent Neural Network Encoder-Decoder for Large Vocabulary Speech Recognition

Liang Lu (1), Xingxing Zhang (1), Kyunghyun Cho (2), Steve Renals (1)

(1) University of Edinburgh, UK
(2) Université de Montréal, Canada

Deep neural networks, combined with hidden Markov models (HMMs), have advanced the state of the art in automatic speech recognition. Recently there has been interest in using systems based on recurrent neural networks (RNNs) to perform sequence modelling directly, without the need for an HMM superstructure. In this paper, we study the RNN encoder-decoder approach for large-vocabulary end-to-end speech recognition, in which an encoder transforms a sequence of acoustic vectors into a sequence of feature representations, from which a decoder recovers a sequence of words. We investigated this approach on the Switchboard corpus, using a training set of around 300 hours of transcribed audio. Without an explicit language model or pronunciation lexicon, we achieved promising recognition accuracy, demonstrating that this approach warrants further investigation.
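The encoder-decoder pipeline the abstract describes can be sketched in a few lines of numpy. This is a minimal illustrative sketch, not the authors' model: it uses vanilla RNN cells with randomly initialised (untrained) weights, compresses the acoustic sequence into a single context vector, and greedily decodes output tokens; the dimensions, the one-hot token "embedding", and the `bos`/`eos` conventions are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_step(x, h, Wx, Wh, b):
    # One vanilla RNN step: h_t = tanh(Wx x_t + Wh h_{t-1} + b).
    return np.tanh(Wx @ x + Wh @ h + b)

# Illustrative dimensions (assumed): 13-dim acoustic frames, toy hidden size and vocabulary.
d_in, d_h, vocab = 13, 32, 50

# Randomly initialised weights; a real system would learn these from transcribed audio.
enc_Wx = rng.normal(0, 0.1, (d_h, d_in))
enc_Wh = rng.normal(0, 0.1, (d_h, d_h))
enc_b  = np.zeros(d_h)
dec_Wx = rng.normal(0, 0.1, (d_h, vocab))   # decoder input: previous token (one-hot here)
dec_Wh = rng.normal(0, 0.1, (d_h, d_h))
dec_b  = np.zeros(d_h)
out_W  = rng.normal(0, 0.1, (vocab, d_h))   # projects hidden state to vocabulary scores

def encode(frames):
    # Run the encoder RNN over the acoustic sequence; the final hidden
    # state serves as a fixed-length summary (context) of the utterance.
    h = np.zeros(d_h)
    for x in frames:
        h = rnn_step(x, h, enc_Wx, enc_Wh, enc_b)
    return h

def decode(context, max_len=10, bos=0, eos=1):
    # Greedy decoding: start from the encoder context and feed the
    # argmax token back in as the next input until eos or max_len.
    h, tok, out = context, bos, []
    for _ in range(max_len):
        x = np.eye(vocab)[tok]           # one-hot "embedding" of the previous token
        h = rnn_step(x, h, dec_Wx, dec_Wh, dec_b)
        tok = int(np.argmax(out_W @ h))  # greedy choice over the output vocabulary
        if tok == eos:
            break
        out.append(tok)
    return out

frames = rng.normal(size=(200, d_in))    # ~2 s of audio at a 10 ms frame rate
hyp = decode(encode(frames))
```

With untrained weights the hypothesis is of course meaningless; the sketch only shows the data flow: a variable-length acoustic sequence is mapped to a context vector, from which a token sequence is emitted without any HMM alignment stage.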


Bibliographic reference. Lu, Liang / Zhang, Xingxing / Cho, Kyunghyun / Renals, Steve (2015): "A study of the recurrent neural network encoder-decoder for large vocabulary speech recognition", in INTERSPEECH 2015, 3249-3253.