ISCA Archive Interspeech 2017

Efficient Knowledge Distillation from an Ensemble of Teachers

Takashi Fukuda, Masayuki Suzuki, Gakuto Kurata, Samuel Thomas, Jia Cui, Bhuvana Ramabhadran

This paper describes the effectiveness of knowledge distillation using teacher-student training for building accurate and compact neural networks. We show that with knowledge distillation, information from multiple acoustic models, such as very deep VGG networks and Long Short-Term Memory (LSTM) models, can be used to train standard convolutional neural network (CNN) acoustic models for a variety of systems requiring a quick turnaround. We examine two strategies to leverage multiple teacher labels for training student models. In the first technique, the weights of the student model are updated by switching teacher labels at the minibatch level. In the second method, student models are trained on multiple streams of information from various teacher distributions via data augmentation. We show that standard CNN acoustic models can achieve comparable recognition accuracy with a much smaller number of model parameters compared to teacher VGG and LSTM acoustic models. Additionally, we investigate the effectiveness of using broadband teacher labels as privileged knowledge for training better narrowband acoustic models within this framework. We show the benefit of this simple technique by training narrowband student models with broadband teacher soft labels on the Aurora 4 task.
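The first strategy described above, updating the student by switching teacher labels at the minibatch level, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the temperature value, the toy logits, and the round-robin teacher schedule are assumptions for demonstration; a real system would use per-frame senone posteriors from trained VGG/LSTM teachers.

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp((x - m) / T) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy between teacher soft targets and student soft outputs.

    Minimized when the student's softened distribution matches the teacher's.
    """
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -sum(t * math.log(s) for t, s in zip(p_teacher, p_student))

# Hypothetical teacher ensemble: each entry maps a batch of inputs to logits.
# Here both are stubs standing in for VGG and LSTM acoustic models.
teacher_ensemble = [
    lambda batch: [[2.0, 0.5, 0.1]] * len(batch),   # "VGG" stub
    lambda batch: [[1.5, 1.0, 0.2]] * len(batch),   # "LSTM" stub
]

def training_losses(batches, student_logits_fn, T=2.0):
    """Round-robin teacher switching: each minibatch uses one teacher's labels."""
    losses = []
    for step, batch in enumerate(batches):
        teacher = teacher_ensemble[step % len(teacher_ensemble)]
        t_logits = teacher(batch)
        s_logits = student_logits_fn(batch)
        loss = sum(kd_loss(s, t, T) for s, t in zip(s_logits, t_logits)) / len(batch)
        losses.append(loss)
    return losses
```

In practice the per-minibatch loss gradient would update the student's weights; only the target distribution changes from batch to batch, so the switching adds essentially no training cost over single-teacher distillation.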

doi: 10.21437/Interspeech.2017-614

Cite as: Fukuda, T., Suzuki, M., Kurata, G., Thomas, S., Cui, J., Ramabhadran, B. (2017) Efficient Knowledge Distillation from an Ensemble of Teachers. Proc. Interspeech 2017, 3697-3701, doi: 10.21437/Interspeech.2017-614

@inproceedings{fukuda17_interspeech,
  author={Takashi Fukuda and Masayuki Suzuki and Gakuto Kurata and Samuel Thomas and Jia Cui and Bhuvana Ramabhadran},
  title={{Efficient Knowledge Distillation from an Ensemble of Teachers}},
  booktitle={Proc. Interspeech 2017},
  year={2017},
  pages={3697--3701},
  doi={10.21437/Interspeech.2017-614}
}