End-to-End Speech Translation with Knowledge Distillation

Yuchen Liu, Hao Xiong, Jiajun Zhang, Zhongjun He, Hua Wu, Haifeng Wang, Chengqing Zong

End-to-end speech translation (ST), which translates source-language speech directly into target-language text, has attracted intensive attention in recent years. Compared to conventional pipeline systems, an end-to-end ST model has the potential benefits of lower latency, smaller model size, and less error propagation. However, such a model, which must combine automatic speech recognition (ASR) and machine translation (MT), is notoriously difficult to train. In this paper, we propose a knowledge distillation approach that improves ST by transferring knowledge from text translation. Specifically, we first train a text translation model, regarded as the teacher model; the ST model is then trained to learn the output probabilities of the teacher model through knowledge distillation. Experiments on the English-French Augmented LibriSpeech and English-Chinese TED corpora show that end-to-end ST is feasible on both similar and dissimilar language pairs. In addition, with the guidance of the teacher model, the end-to-end ST model gains significant improvements of over 3.5 BLEU points.
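The distillation objective described above — training the ST student to match the teacher's output probabilities — can be sketched as a word-level knowledge distillation loss that interpolates cross-entropy on the gold target tokens with cross-entropy against the teacher's distribution. The function names, the `alpha` interpolation weight, and the numpy setup below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last (vocabulary) axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, gold_ids, alpha=0.8):
    """Word-level KD loss (hypothetical sketch).

    student_logits, teacher_logits: (T, V) per-position vocabulary logits
    gold_ids: (T,) gold target token ids
    alpha: weight on the distillation term (assumed hyperparameter)
    """
    p_student = softmax(student_logits)   # (T, V)
    p_teacher = softmax(teacher_logits)   # (T, V)
    T = student_logits.shape[0]
    # negative log-likelihood of the gold target tokens
    nll = -np.log(p_student[np.arange(T), gold_ids]).mean()
    # cross-entropy between teacher and student output distributions
    kd = -(p_teacher * np.log(p_student)).sum(axis=-1).mean()
    return (1 - alpha) * nll + alpha * kd
```

With `alpha=0` this reduces to ordinary maximum-likelihood training on the gold translations; `alpha=1` trains the student purely to imitate the teacher's soft output distribution.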

DOI: 10.21437/Interspeech.2019-2582

Cite as: Liu, Y., Xiong, H., Zhang, J., He, Z., Wu, H., Wang, H., Zong, C. (2019) End-to-End Speech Translation with Knowledge Distillation. Proc. Interspeech 2019, 1128-1132, DOI: 10.21437/Interspeech.2019-2582.

@inproceedings{liu19_interspeech,
  author={Yuchen Liu and Hao Xiong and Jiajun Zhang and Zhongjun He and Hua Wu and Haifeng Wang and Chengqing Zong},
  title={{End-to-End Speech Translation with Knowledge Distillation}},
  booktitle={Proc. Interspeech 2019},
  year={2019},
  pages={1128--1132},
  doi={10.21437/Interspeech.2019-2582}
}