ISCA Archive Interspeech 2021

Advanced Long-Context End-to-End Speech Recognition Using Context-Expanded Transformers

Takaaki Hori, Niko Moritz, Chiori Hori, Jonathan Le Roux

This paper addresses end-to-end automatic speech recognition (ASR) for long audio recordings such as lectures and conversational speech. Most end-to-end ASR models are designed to recognize independent utterances, but contextual information (e.g., speaker or topic) over multiple utterances is known to be useful for ASR. In our prior work, we proposed a context-expanded Transformer that accepts multiple consecutive utterances at the same time and predicts an output sequence for the last utterance, achieving a 5–15% relative error reduction over utterance-based baselines on lecture and conversational ASR benchmarks. Although these results showed a remarkable performance gain, there is still potential to further improve the model architecture and the decoding process. In this paper, we extend our prior work by (1) introducing the Conformer architecture to further improve accuracy, (2) accelerating the decoding process with a novel activation recycling technique, and (3) enabling streaming decoding with triggered attention. We demonstrate that the extended Transformer provides state-of-the-art end-to-end ASR performance, obtaining a 17.3% character error rate on the HKUST dataset and 12.0%/6.3% word error rates on the Switchboard-300 Eval2000 CallHome/Switchboard test sets. The new decoding method reduces decoding time by more than 50% and further enables streaming ASR with limited accuracy degradation.
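The core idea of the context-expanded input, as described above, is to feed the model several consecutive utterances at once while decoding only the last one. A minimal sketch of how such inputs might be assembled is shown below; the function name, context-window parameter, and feature shapes are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def make_context_expanded_inputs(utterance_feats, n_context):
    """For each utterance, concatenate up to `n_context` preceding
    utterances with it along the time axis. The model would then
    predict the output sequence for the last utterance only.
    All names here are hypothetical, for illustration."""
    expanded = []
    for i in range(len(utterance_feats)):
        start = max(0, i - n_context)
        # time-axis concatenation of context + current utterance
        ctx = np.concatenate(utterance_feats[start:i + 1], axis=0)
        expanded.append(ctx)
    return expanded

# Toy example: three utterances of 4, 6, and 5 frames of 80-dim features.
utts = [np.zeros((t, 80), dtype=np.float32) for t in (4, 6, 5)]
inputs = make_context_expanded_inputs(utts, n_context=2)
# Frame counts grow as context accumulates: 4, then 4+6, then 4+6+5.
```

In practice the encoder consumes the whole expanded sequence, while the decoder's loss and output are restricted to the final utterance, which is how contextual information (speaker, topic) can influence recognition without requiring transcripts for the context utterances.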

doi: 10.21437/Interspeech.2021-1643

Cite as: Hori, T., Moritz, N., Hori, C., Le Roux, J. (2021) Advanced Long-Context End-to-End Speech Recognition Using Context-Expanded Transformers. Proc. Interspeech 2021, 2097-2101, doi: 10.21437/Interspeech.2021-1643

@inproceedings{hori21_interspeech,
  author={Takaaki Hori and Niko Moritz and Chiori Hori and Jonathan Le Roux},
  title={{Advanced Long-Context End-to-End Speech Recognition Using Context-Expanded Transformers}},
  year=2021,
  booktitle={Proc. Interspeech 2021},
  pages={2097--2101},
  doi={10.21437/Interspeech.2021-1643}
}