ISCA Archive Interspeech 2021

Speed up Training with Variable Length Inputs by Efficient Batching Strategies

Zhenhao Ge, Lakshmish Kaushik, Masanori Omote, Saket Kumar

When training neural networks, model performance is always the first priority, but training efficiency also plays an important role in model deployment. There are many ways to speed up training with minimal performance loss, such as training with more GPUs or with mixed precision, optimizing training parameters, or making features more compact yet still representative. Since mini-batch training is now the go-to approach for many machine learning tasks, minimizing the zero-padding needed to incorporate samples of different lengths into one batch is an alternative way to save training time. Here we propose a batching strategy based on semi-sorted samples, with dynamic batch sizes and batch randomization. Replacing random batching with the proposed strategy saves more than 40% of training time without compromising performance when training seq2seq neural text-to-speech models based on the Tacotron framework. We also compare it with two other batching strategies and show that it performs similarly in terms of time saved and performance maintained, but with a simpler concept and a smoother tuning parameter for balancing zero-padding against the level of randomness.
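The core idea of semi-sorted batching can be sketched as follows: perturb each sample's length by a small random amount, sort by the noisy key, chunk into batches, and shuffle the batch order. A minimal illustrative sketch, assuming this interpretation of the abstract (the function name, the `noise` parameter, and the uniform perturbation are illustrative choices, not the authors' exact algorithm; the paper's dynamic batch sizes are not modeled here):

```python
import random

def semi_sorted_batches(lengths, batch_size, noise=0.1, seed=0):
    """Illustrative semi-sorted batching (assumed interpretation of the
    abstract, not the authors' exact algorithm).

    Each sample's length is perturbed by up to +/- noise * max_length
    before sorting, so a batch holds samples of similar but not
    identical length. A larger `noise` gives more shuffling randomness
    at the cost of more zero-padding; noise=0 is fully sorted batching.
    """
    rng = random.Random(seed)
    max_len = max(lengths)
    # Sort sample indices by their noise-perturbed length.
    keys = [(l + rng.uniform(-noise, noise) * max_len, i)
            for i, l in enumerate(lengths)]
    order = [i for _, i in sorted(keys)]
    # Chunk the semi-sorted order into batches, then shuffle the
    # batch order so each epoch sees batches in a different sequence.
    batches = [order[i:i + batch_size]
               for i in range(0, len(order), batch_size)]
    rng.shuffle(batches)
    return batches
```

With `noise=0` each batch spans consecutive lengths (minimal padding, no within-epoch randomness); increasing `noise` smoothly interpolates toward fully random batching, which matches the abstract's description of a single tuning parameter trading padding against randomness.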


doi: 10.21437/Interspeech.2021-2100

Cite as: Ge, Z., Kaushik, L., Omote, M., Kumar, S. (2021) Speed up Training with Variable Length Inputs by Efficient Batching Strategies. Proc. Interspeech 2021, 156-160, doi: 10.21437/Interspeech.2021-2100

@inproceedings{ge21_interspeech,
  author={Zhenhao Ge and Lakshmish Kaushik and Masanori Omote and Saket Kumar},
  title={{Speed up Training with Variable Length Inputs by Efficient Batching Strategies}},
  year=2021,
  booktitle={Proc. Interspeech 2021},
  pages={156--160},
  doi={10.21437/Interspeech.2021-2100}
}