ISCA Archive Interspeech 2021

Cross-Modal Transformer-Based Neural Correction Models for Automatic Speech Recognition

Tomohiro Tanaka, Ryo Masumura, Mana Ihori, Akihiko Takashima, Takafumi Moriya, Takanori Ashihara, Shota Orihashi, Naoki Makishima

We propose cross-modal transformer-based neural correction models that refine the output of an automatic speech recognition (ASR) system so as to exclude ASR errors. Generally, neural correction models are composed of encoder-decoder networks, which can directly model sequence-to-sequence mapping problems. The most successful method is to use both input speech and its ASR output text as the input contexts for the encoder-decoder networks. However, the conventional method cannot take into account the relationships between these two different modal inputs because the input contexts are separately encoded for each modality. To effectively leverage the correlated information between the two different modal inputs, our proposed models encode the two contexts jointly on the basis of cross-modal self-attention using a transformer. We expect that cross-modal self-attention can effectively capture the relationships between the two modalities for refining ASR hypotheses. We also introduce a shallow fusion technique to efficiently integrate the first-pass ASR model and our proposed neural correction model. Experiments on Japanese natural language ASR tasks demonstrated that our proposed models achieve better ASR performance than conventional neural correction models.
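The core idea of joint encoding can be illustrated with a minimal sketch: concatenate the speech-frame representations and the ASR hypothesis token embeddings into one sequence, then apply self-attention over the combined sequence so attention weights span both modalities. This is an illustrative stand-in, not the authors' implementation; the dimensions, random weights, and variable names below are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # model dimension (illustrative choice)

# Hypothetical inputs: 6 encoded speech frames and 4 ASR hypothesis
# token embeddings, both already projected to the same dimension d.
speech = rng.standard_normal((6, d))
text = rng.standard_normal((4, d))

# Joint encoding: concatenate both modalities into a single sequence so
# that self-attention can attend across them (cross-modal self-attention).
x = np.concatenate([speech, text], axis=0)  # shape (10, d)

# Single-head scaled dot-product self-attention with random stand-in weights.
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
q, k, v = x @ Wq, x @ Wk, x @ Wv
scores = q @ k.T / np.sqrt(d)
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)
out = attn @ v  # jointly encoded representations, shape (10, d)

# Because both modalities share one attention map, each text position can
# attend to every speech position: attn[6:, :6] holds those weights.
print(out.shape, attn[6:, :6].shape)
```

Shallow fusion, the second technique mentioned above, would then combine the first-pass ASR model's score with the correction model's score during decoding, typically as a weighted sum of log-probabilities (e.g. `log p_asr + lambda * log p_corr`); the weight `lambda` here is a hypothetical tuning parameter.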


doi: 10.21437/Interspeech.2021-1992

Cite as: Tanaka, T., Masumura, R., Ihori, M., Takashima, A., Moriya, T., Ashihara, T., Orihashi, S., Makishima, N. (2021) Cross-Modal Transformer-Based Neural Correction Models for Automatic Speech Recognition. Proc. Interspeech 2021, 4059-4063, doi: 10.21437/Interspeech.2021-1992

@inproceedings{tanaka21b_interspeech,
  author={Tomohiro Tanaka and Ryo Masumura and Mana Ihori and Akihiko Takashima and Takafumi Moriya and Takanori Ashihara and Shota Orihashi and Naoki Makishima},
  title={{Cross-Modal Transformer-Based Neural Correction Models for Automatic Speech Recognition}},
  year=2021,
  booktitle={Proc. Interspeech 2021},
  pages={4059--4063},
  doi={10.21437/Interspeech.2021-1992}
}