ISCA Archive Interspeech 2021

CoDERT: Distilling Encoder Representations with Co-Learning for Transducer-Based Speech Recognition

Rupak Vignesh Swaminathan, Brian King, Grant P. Strimel, Jasha Droppo, Athanasios Mouchtaris

We propose a simple yet effective method to compress an RNN-Transducer (RNN-T) through the well-known knowledge distillation paradigm. We show that the transducer’s encoder outputs naturally have high entropy and contain rich information about acoustically similar word-piece confusions. This rich information is suppressed when combined with the lower-entropy decoder outputs to produce the joint network logits. Consequently, we introduce an auxiliary loss to distill the encoder logits from a teacher transducer’s encoder, and explore training strategies under which this encoder distillation works effectively. We find that tandem training of teacher and student encoders with an in-place encoder distillation outperforms the use of a pre-trained and static teacher transducer. We also report an interesting phenomenon we refer to as implicit distillation, which occurs when the teacher and student encoders share the same decoder. Our experiments show 5.37–8.4% relative word error rate reductions (WERR) on in-house test sets, and 5.05–6.18% relative WERRs on LibriSpeech test sets.
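The auxiliary encoder-distillation loss described above can be illustrated with a minimal sketch: a per-frame KL divergence between the teacher's and student's encoder output distributions, added to the usual RNN-T training loss. The function names, the KL formulation, and the weighting scheme below are illustrative assumptions for exposition, not the paper's exact implementation.

```python
import math

def softmax(logits):
    # Numerically stable softmax over one frame's logit vector.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def encoder_distillation_loss(teacher_logits, student_logits):
    """Average per-frame KL(teacher || student) over encoder outputs.

    Each argument is a list of per-frame logit vectors over word-piece
    labels. This pushes the student's encoder distribution toward the
    teacher's high-entropy distribution, which the abstract argues
    carries rich word-piece confusion information.
    """
    total = 0.0
    for t_frame, s_frame in zip(teacher_logits, student_logits):
        p = softmax(t_frame)
        q = softmax(s_frame)
        total += sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return total / len(teacher_logits)
```

In training, this term would be combined with the standard transducer loss, e.g. `L = L_rnnt + lam * encoder_distillation_loss(...)` (with `lam` a hypothetical weighting hyperparameter); in the tandem setup, teacher and student encoders are updated jointly rather than the teacher being frozen.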


doi: 10.21437/Interspeech.2021-797

Cite as: Swaminathan, R.V., King, B., Strimel, G.P., Droppo, J., Mouchtaris, A. (2021) CoDERT: Distilling Encoder Representations with Co-Learning for Transducer-Based Speech Recognition. Proc. Interspeech 2021, 4543-4547, doi: 10.21437/Interspeech.2021-797

@inproceedings{swaminathan21_interspeech,
  author={Rupak Vignesh Swaminathan and Brian King and Grant P. Strimel and Jasha Droppo and Athanasios Mouchtaris},
  title={{CoDERT: Distilling Encoder Representations with Co-Learning for Transducer-Based Speech Recognition}},
  year={2021},
  booktitle={Proc. Interspeech 2021},
  pages={4543--4547},
  doi={10.21437/Interspeech.2021-797}
}