ISCA Archive Interspeech 2017

Towards Better Decoding and Language Model Integration in Sequence to Sequence Models

Jan Chorowski, Navdeep Jaitly

The recently proposed Sequence-to-Sequence (seq2seq) framework advocates replacing complex data processing pipelines, such as an entire automatic speech recognition system, with a single neural network trained in an end-to-end fashion. In this contribution, we analyse an attention-based seq2seq speech recognition system that directly transcribes recordings into characters. We observe two shortcomings: overconfidence in its predictions and a tendency to produce incomplete transcriptions when language models are used. We propose practical solutions to both problems, achieving competitive speaker-independent word error rates on the Wall Street Journal dataset: without a separate language model we reach 10.6% WER, while with a trigram language model we reach 6.7% WER, a state-of-the-art result for HMM-free methods.
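The abstract does not spell out the proposed remedies, so the sketch below is only an illustration of two techniques commonly used against the problems it names, not a statement of the paper's exact method: label smoothing to counter overconfident per-token distributions, and a coverage-style bonus in the beam-search score to counter truncated hypotheses when an external language model is fused in. All function names, weights, and the coverage definition here are assumptions for illustration.

```python
import numpy as np

def label_smoothed_nll(log_probs, target, eps=0.1):
    """Cross-entropy with uniform label smoothing (illustrative, not the
    paper's exact formulation).

    log_probs: (T, V) array of per-step log-probabilities.
    target:    (T,) array of gold token ids.
    eps:       probability mass spread uniformly over the vocabulary,
               which softens the targets and discourages overconfidence.
    """
    T, V = log_probs.shape
    nll = -log_probs[np.arange(T), target]   # standard negative log-likelihood
    smooth = -log_probs.mean(axis=1)         # uniform-prior smoothing term
    return ((1.0 - eps) * nll + eps * smooth).sum()

def fused_beam_score(seq_logp, lm_logp, coverage, lm_weight=0.5, cov_weight=1.0):
    """Score a partial hypothesis during beam search with shallow LM fusion
    (hypothetical weights; the coverage term is one common way to penalize
    incomplete transcriptions).

    seq_logp: seq2seq log-probability of the hypothesis so far.
    lm_logp:  external LM log-probability of the same token sequence.
    coverage: e.g. the number of input frames that have received
              significant attention mass; rewarding it keeps the decoder
              from stopping before the recording is fully transcribed.
    """
    return seq_logp + lm_weight * lm_logp + cov_weight * coverage
```

As a usage sketch, `label_smoothed_nll` would replace the plain cross-entropy loss at training time, while `fused_beam_score` would rank beam hypotheses at decoding time in place of the raw seq2seq log-probability.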


doi: 10.21437/Interspeech.2017-343

Cite as: Chorowski, J., Jaitly, N. (2017) Towards Better Decoding and Language Model Integration in Sequence to Sequence Models. Proc. Interspeech 2017, 523-527, doi: 10.21437/Interspeech.2017-343

@inproceedings{chorowski17_interspeech,
  author={Jan Chorowski and Navdeep Jaitly},
  title={{Towards Better Decoding and Language Model Integration in Sequence to Sequence Models}},
  year={2017},
  booktitle={Proc. Interspeech 2017},
  pages={523--527},
  doi={10.21437/Interspeech.2017-343}
}