Vectorized Beam Search for CTC-Attention-Based Speech Recognition

Hiroshi Seki, Takaaki Hori, Shinji Watanabe, Niko Moritz, Jonathan Le Roux


This paper investigates efficient beam search techniques for end-to-end automatic speech recognition (ASR) with an attention-based encoder-decoder architecture. We accelerate the decoding process by vectorizing multiple hypotheses during the beam search, replacing the score accumulation steps performed for each hypothesis with vector-matrix operations over the vectorized hypotheses. This modification allows us to take advantage of the parallel computing capabilities of multi-core CPUs and GPUs, resulting in significant speedups, and also enables us to process multiple utterances in a batch simultaneously. Moreover, we extend the decoding method to incorporate recurrent neural network language model (RNNLM) and connectionist temporal classification (CTC) scores, which typically improve ASR accuracy but had not previously been investigated within such parallelized decoding algorithms. Experiments with the LibriSpeech and Corpus of Spontaneous Japanese datasets demonstrate that the vectorized beam search achieves a 1.8× speedup on a CPU and a 33× speedup on a GPU compared with the original CPU implementation. When using joint CTC/attention decoding with an RNNLM, we also achieve an 11× speedup on a GPU while retaining the accuracy benefits of CTC and the RNNLM. As a result, we achieve near-real-time processing with a small latency of 0.1× real time, without a streaming search process.
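
To make the score-accumulation idea concrete, the following is a minimal PyTorch sketch of one vectorized beam-search step; it is an illustration under assumptions, not the authors' implementation. It assumes a batched decoder has already produced step_logp, a (beam, vocab) matrix of next-token log probabilities for all hypotheses computed in a single forward pass; all variable names are illustrative.

import torch

beam, vocab = 4, 10
torch.manual_seed(0)

# Accumulated log-probability score of each hypothesis in the beam.
scores = torch.zeros(beam)

# Stand-in for a batched decoder step: (beam, vocab) log probabilities.
# With joint decoding, this matrix would itself be a weighted combination,
# e.g. lam * ctc_prefix_logp + (1 - lam) * att_logp + beta * lm_logp,
# all kept in the same batched (beam, vocab) shape.
step_logp = torch.randn(beam, vocab).log_softmax(dim=-1)

# Score accumulation as one broadcasted matrix operation instead of a
# per-hypothesis Python loop: every (hypothesis, token) pair is scored at once.
cand = scores.unsqueeze(1) + step_logp          # shape: (beam, vocab)

# Prune to the best `beam` expansions over the flattened candidate matrix.
top_scores, flat_idx = cand.view(-1).topk(beam)
prev_hyp = flat_idx // vocab    # which hypothesis each expansion extends
next_tok = flat_idx % vocab     # which token extends it
scores = top_scores

Running this on a GPU only requires placing the tensors on the device; the same broadcast-and-topk step then scores the whole beam, and, with an extra leading batch dimension, multiple utterances, in parallel.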


 DOI: 10.21437/Interspeech.2019-2860

Cite as: Seki, H., Hori, T., Watanabe, S., Moritz, N., Le Roux, J. (2019) Vectorized Beam Search for CTC-Attention-Based Speech Recognition. Proc. Interspeech 2019, 3825-3829, DOI: 10.21437/Interspeech.2019-2860.


@inproceedings{Seki2019,
  author={Hiroshi Seki and Takaaki Hori and Shinji Watanabe and Niko Moritz and Jonathan Le Roux},
  title={{Vectorized Beam Search for CTC-Attention-Based Speech Recognition}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={3825--3829},
  doi={10.21437/Interspeech.2019-2860},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2860}
}