Listen, Attend, Spell and Adapt: Speaker Adapted Sequence-to-Sequence ASR

Felix Weninger, Jesús Andrés-Ferrer, Xinwei Li, Puming Zhan


Sequence-to-sequence (seq2seq) based ASR systems have shown state-of-the-art performance while having clear advantages in terms of simplicity. However, comparisons are mostly done on speaker independent (SI) ASR systems, even though speaker adapted conventional systems are commonly used in practice to improve robustness to speaker and environment variations. In this paper, we apply speaker adaptation to seq2seq models with the goal of matching the performance of conventional ASR adaptation. Specifically, we investigate Kullback-Leibler divergence (KLD) as well as Linear Hidden Network (LHN) based adaptation for seq2seq ASR, using different amounts (up to 20 hours) of adaptation data per speaker. Our SI models are trained on large amounts of dictation data and achieve state-of-the-art results. We obtained a 25% relative word error rate (WER) improvement with KLD adaptation of the seq2seq model vs. an 18.7% gain from acoustic model adaptation in the conventional system. We also show that the WER of the seq2seq model decreases log-linearly with the amount of adaptation data. Finally, we analyze adaptation based on the minimum WER criterion and adapting the language model (LM) for score fusion with the speaker adapted seq2seq model, which yield further improvements in seq2seq system performance.
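As a rough illustration of the KLD adaptation idea mentioned in the abstract, the standard formulation regularizes the adapted model toward the speaker-independent (SI) model by interpolating the ground-truth label distribution with the SI model's output posteriors, then training with cross-entropy against the interpolated targets. The sketch below is a minimal, hypothetical implementation of that target interpolation in NumPy; the function names, the interpolation weight `rho`, and the toy distributions are illustrative assumptions, not the paper's actual code.

```python
import numpy as np

def kld_adaptation_targets(one_hot, si_posteriors, rho):
    """Interpolate hard labels with SI-model posteriors (hypothetical sketch).

    Training the adapted model with cross-entropy against these soft
    targets is equivalent to adding a KLD penalty that keeps the adapted
    model's outputs close to the SI model's outputs; rho controls the
    strength of the regularization (rho = 0 recovers plain fine-tuning).
    """
    return (1.0 - rho) * one_hot + rho * si_posteriors

def soft_cross_entropy(targets, log_probs):
    # Mean cross-entropy over time steps, summing over the label dimension.
    return float(-(targets * log_probs).sum(axis=-1).mean())

# Toy example: 2 time steps, 3 output labels.
one_hot = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0]])
si_post = np.array([[0.7, 0.2, 0.1],
                    [0.1, 0.8, 0.1]])
targets = kld_adaptation_targets(one_hot, si_post, rho=0.5)
# Each row of `targets` remains a valid probability distribution.
```

With rho close to 1, the adapted model is pulled strongly toward the SI model's behavior, which helps when only a few minutes of adaptation data are available; smaller rho lets the model fit the speaker more aggressively.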


DOI: 10.21437/Interspeech.2019-2719

Cite as: Weninger, F., Andrés-Ferrer, J., Li, X., Zhan, P. (2019) Listen, Attend, Spell and Adapt: Speaker Adapted Sequence-to-Sequence ASR. Proc. Interspeech 2019, 3805-3809, DOI: 10.21437/Interspeech.2019-2719.


@inproceedings{Weninger2019,
  author={Felix Weninger and Jesús Andrés-Ferrer and Xinwei Li and Puming Zhan},
  title={{Listen, Attend, Spell and Adapt: Speaker Adapted Sequence-to-Sequence ASR}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={3805--3809},
  doi={10.21437/Interspeech.2019-2719},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2719}
}