Cross-Attention End-to-End ASR for Two-Party Conversations

Suyoun Kim, Siddharth Dalmia, Florian Metze


We present an end-to-end speech recognition model that learns the interaction between two speakers based on turn-changing information. Unlike conventional speech recognition models, our model exploits both speakers' history of conversational-context information, spanning multiple turns, within an end-to-end framework. Specifically, we propose a speaker-specific cross-attention mechanism that can look at the output of the other speaker as well as that of the current speaker, to better recognize long conversations. We evaluate the models on the Switchboard conversational speech corpus and show that our model outperforms standard end-to-end speech recognition models.
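The core idea of a speaker-specific cross-attention mechanism can be illustrated with a minimal numpy sketch. This is not the paper's actual implementation (which operates inside an end-to-end ASR decoder); the function names, vector dimensions, and the scaled dot-product scoring are illustrative assumptions. The sketch shows a decoder state attending jointly over context vectors from both the current speaker's and the other speaker's conversation history:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over attention scores.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, self_ctx, other_ctx):
    """Hypothetical sketch: attend over the concatenation of the
    current speaker's context vectors (self_ctx, shape (Ts, d)) and
    the other speaker's context vectors (other_ctx, shape (To, d)),
    given a decoder query of shape (d,)."""
    keys = np.concatenate([self_ctx, other_ctx], axis=0)   # (Ts+To, d)
    scores = keys @ query / np.sqrt(query.shape[0])        # (Ts+To,)
    weights = softmax(scores)                              # attention over both speakers
    context = weights @ keys                               # (d,) fused context vector
    return context, weights
```

In this simplified view, the attention weights over `other_ctx` let the model condition the current speaker's recognition on what the other party said in earlier turns, which is the intuition behind the cross-attention described above.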


DOI: 10.21437/Interspeech.2019-3173

Cite as: Kim, S., Dalmia, S., Metze, F. (2019) Cross-Attention End-to-End ASR for Two-Party Conversations. Proc. Interspeech 2019, 4380-4384, DOI: 10.21437/Interspeech.2019-3173.


@inproceedings{Kim2019,
  author={Suyoun Kim and Siddharth Dalmia and Florian Metze},
  title={{Cross-Attention End-to-End ASR for Two-Party Conversations}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={4380--4384},
  doi={10.21437/Interspeech.2019-3173},
  url={http://dx.doi.org/10.21437/Interspeech.2019-3173}
}