Enhancing Monotonic Multihead Attention for Streaming ASR

Hirofumi Inaguma, Masato Mimura, Tatsuya Kawahara


We investigate monotonic multihead attention (MMA), extending hard monotonic attention to Transformer-based automatic speech recognition (ASR) for online streaming applications. For streaming inference, all monotonic attention (MA) heads should learn proper alignments because the next token is not generated until every head detects the corresponding token boundary. However, we found that not all MA heads learn alignments with a naïve implementation. To encourage every head to learn alignments properly, we propose HeadDrop regularization, which stochastically masks out a subset of heads during training. Furthermore, we propose pruning redundant heads to improve consensus among heads for boundary detection and to prevent the delayed token generation such heads cause. We also extend the chunkwise attention on each MA head to its multihead counterpart. Finally, we propose head-synchronous beam search decoding to guarantee stable streaming inference.
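The core of the proposed HeadDrop regularization is to mask out entire MA heads at random during training so that no single head can be relied on for boundary detection. The following is a minimal NumPy sketch of that idea, not the authors' implementation: the function name `headdrop`, the drop probability `p`, the renormalization over surviving heads, and the guarantee that at least one head survives are all illustrative assumptions.

```python
import numpy as np

def headdrop(head_outputs, p=0.5, training=True, rng=None):
    """Stochastically mask out entire attention heads during training.

    head_outputs: array of shape (num_heads, d), the per-head context vectors.
    p: probability of dropping each head (an assumed hyperparameter).
    Returns the context averaged over surviving heads; at inference time
    (training=False) all heads contribute, as in standard dropout-style schemes.
    """
    rng = rng or np.random.default_rng()
    num_heads = head_outputs.shape[0]
    if not training:
        return head_outputs.mean(axis=0)
    keep = rng.random(num_heads) >= p          # sample a binary mask per head
    if not keep.any():                         # ensure at least one head survives
        keep[rng.integers(num_heads)] = True
    # Average only over surviving heads, so their contribution is renormalized.
    return head_outputs[keep].mean(axis=0)
```

Because a dropped head contributes nothing to the context vector, the remaining heads must each produce useful alignments on their own, which is the intended regularization effect.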


DOI: 10.21437/Interspeech.2020-1780

Cite as: Inaguma, H., Mimura, M., Kawahara, T. (2020) Enhancing Monotonic Multihead Attention for Streaming ASR. Proc. Interspeech 2020, 2137-2141, DOI: 10.21437/Interspeech.2020-1780.


@inproceedings{Inaguma2020,
  author={Hirofumi Inaguma and Masato Mimura and Tatsuya Kawahara},
  title={{Enhancing Monotonic Multihead Attention for Streaming ASR}},
  year=2020,
  booktitle={Proc. Interspeech 2020},
  pages={2137--2141},
  doi={10.21437/Interspeech.2020-1780},
  url={http://dx.doi.org/10.21437/Interspeech.2020-1780}
}