ShrinkML: End-to-End ASR Model Compression Using Reinforcement Learning

Łukasz Dudziak, Mohamed S. Abdelfattah, Ravichander Vipperla, Stefanos Laskaridis, Nicholas D. Lane


End-to-end automatic speech recognition (ASR) models are increasingly large and complex to achieve the best possible accuracy. In this paper, we build an AutoML system that uses reinforcement learning (RL) to optimize the per-layer compression ratios when applied to a state-of-the-art attention-based end-to-end ASR model composed of several LSTM layers. We use singular value decomposition (SVD) low-rank matrix factorization as the compression method. For our RL-based AutoML system, we focus on practical considerations such as the choice of the reward/punishment functions, the formation of an effective search space, and the creation of a representative but small dataset for quick evaluation between search steps. Finally, we present accuracy results on LibriSpeech of the model compressed by our AutoML system, and we compare it to manually-compressed models. Our results show that in the absence of retraining our RL-based search is an effective and practical method to compress a production-grade ASR system. When retraining is possible, we show that our AutoML system can select better highly-compressed seed models compared to hand-crafted rank selection, thus allowing for more compression than previously possible.
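To illustrate the compression primitive the abstract describes, the sketch below shows SVD-based low-rank factorization of a single weight matrix with NumPy. This is a minimal illustration, not the paper's implementation: the function name `svd_compress` and the example dimensions are assumptions, and the per-layer rank would in practice be chosen by the RL search rather than fixed by hand.

```python
import numpy as np

def svd_compress(w, rank):
    """Approximate weight matrix w (m x n) by two low-rank factors.

    Keeping only the top `rank` singular values replaces the m*n
    parameters of w with rank*(m + n) parameters, so small ranks
    give large compression ratios.
    """
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    a = u[:, :rank] * s[:rank]   # shape (m, rank): left factor scaled by singular values
    b = vt[:rank, :]             # shape (rank, n): right factor
    return a, b

# Hypothetical example: compress a 512x512 layer weight to rank 64.
rng = np.random.default_rng(0)
w = rng.standard_normal((512, 512))
a, b = svd_compress(w, 64)
w_approx = a @ b                          # low-rank reconstruction of w
ratio = (a.size + b.size) / w.size        # fraction of parameters kept
print(f"kept {ratio:.2%} of parameters")  # prints "kept 25.00% of parameters"
```

In a network, the factored matrix is typically realized as two consecutive linear layers (one of width `rank`), so the compression translates directly into fewer multiply-accumulates at inference time.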


 DOI: 10.21437/Interspeech.2019-2811

Cite as: Dudziak, Ł., Abdelfattah, M.S., Vipperla, R., Laskaridis, S., Lane, N.D. (2019) ShrinkML: End-to-End ASR Model Compression Using Reinforcement Learning. Proc. Interspeech 2019, 2235-2239, DOI: 10.21437/Interspeech.2019-2811.


@inproceedings{Dudziak2019,
  author={Łukasz Dudziak and Mohamed S. Abdelfattah and Ravichander Vipperla and Stefanos Laskaridis and Nicholas D. Lane},
  title={{ShrinkML: End-to-End ASR Model Compression Using Reinforcement Learning}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={2235--2239},
  doi={10.21437/Interspeech.2019-2811},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2811}
}