ISCA Archive Interspeech 2021

Knowledge Distillation Based Training of Universal ASR Source Models for Cross-Lingual Transfer

Takashi Fukuda, Samuel Thomas

In this paper we introduce a novel knowledge distillation-based framework for training universal source models. In our proposed approach for automatic speech recognition (ASR), multilingual source models are first trained using multiple language-dependent resources before being used to initialize language-specific target models in low-resource settings. For the proposed source models to be effective in cross-lingual transfer to novel target languages, the training framework encourages the models to perform accurate universal phone classification while ignoring any language-dependent characteristics present in the training data set. These two goals are achieved by applying knowledge distillation to improve the models’ universal phone classification performance, along with a shuffling mechanism that alleviates any language-specific dependencies that might be learned. The benefits of the proposed technique are demonstrated in several practical settings, where either large amounts or only limited quantities of unbalanced multilingual data resources are available for source model creation. Compared to a conventional knowledge transfer learning method, the proposed approaches achieve a relative WER reduction of 8–10% in streaming ASR settings for various low-resource target languages.
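The distillation objective the abstract refers to can be sketched with the standard soft-target formulation: the student is trained to match temperature-softened teacher posteriors over universal phone classes. This is a minimal illustrative sketch of that general technique, not the paper's exact recipe; the temperature value and all function names here are assumptions.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over the last axis (phone classes)."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-target cross-entropy between teacher and student phone
    posteriors, averaged over frames. Minimized when the student's
    softened distribution matches the teacher's. Temperature is an
    illustrative choice, not taken from the paper."""
    p_teacher = softmax(teacher_logits, temperature)
    log_p_student = np.log(softmax(student_logits, temperature) + 1e-12)
    return float(-(p_teacher * log_p_student).sum(axis=-1).mean())
```

A student frame whose logits already match the teacher's incurs only the teacher's entropy; any deviation increases the loss, which is what drives the student toward universal phone posteriors.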


doi: 10.21437/Interspeech.2021-796

Cite as: Fukuda, T., Thomas, S. (2021) Knowledge Distillation Based Training of Universal ASR Source Models for Cross-Lingual Transfer. Proc. Interspeech 2021, 3450-3454, doi: 10.21437/Interspeech.2021-796

@inproceedings{fukuda21_interspeech,
  author={Takashi Fukuda and Samuel Thomas},
  title={{Knowledge Distillation Based Training of Universal ASR Source Models for Cross-Lingual Transfer}},
  year=2021,
  booktitle={Proc. Interspeech 2021},
  pages={3450--3454},
  doi={10.21437/Interspeech.2021-796}
}