15th Annual Conference of the International Speech Communication Association

September 14-18, 2014

Distributed Asynchronous Optimization of Convolutional Neural Networks

William Chan, Ian Lane

Carnegie Mellon University, USA

Recently, deep Convolutional Neural Networks have been shown to outperform Deep Neural Networks for acoustic modelling, producing state-of-the-art accuracy in speech recognition tasks. Convolutional models provide increased robustness through pooling invariance and weight sharing across spectrum and time. However, training convolutional models is computationally expensive, especially on large training corpora. In this paper, we present a novel algorithm for scalable training of deep Convolutional Neural Networks across multiple GPUs. Our distributed asynchronous stochastic gradient descent algorithm incorporates sparse gradients, momentum and gradient decay to accelerate training. The approach is stable, requiring neither warm-starting nor excessively large minibatches, and allows convolutional models to be trained efficiently across multiple GPUs, scaling a model asynchronously across 5 GPU workers with ~68% efficiency.
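The core idea described in the abstract, asynchronous SGD in which workers push sparse gradients with momentum to a shared set of parameters, can be sketched as below. This is a minimal illustration, not the paper's implementation: the `ParameterServer` class, the toy quadratic loss, and the use of Python threads in place of GPU workers are all assumptions made for the example.

```python
import threading

class ParameterServer:
    """Hypothetical shared-parameter store; workers push sparse gradients."""

    def __init__(self, dim, lr=0.01, momentum=0.9):
        self.w = [0.0] * dim          # shared model weights
        self.v = [0.0] * dim          # momentum buffer
        self.lr = lr
        self.momentum = momentum
        self.lock = threading.Lock()  # serialize updates to shared state

    def apply_sparse_gradient(self, grad):
        # grad is a dict {index: value}: only nonzero entries are sent,
        # mirroring the sparse-gradient idea to reduce communication.
        with self.lock:
            for i, g in grad.items():
                self.v[i] = self.momentum * self.v[i] - self.lr * g
                self.w[i] += self.v[i]

def worker(server, indices, target, steps=200):
    # Each "GPU worker" repeatedly computes gradients on its shard.
    # Toy loss per coordinate: L_i = 0.5 * (w_i - target_i)^2.
    for _ in range(steps):
        grad = {i: server.w[i] - target[i] for i in indices}
        server.apply_sparse_gradient(grad)

server = ParameterServer(dim=4)
target = [1.0, -2.0, 0.5, 3.0]
shards = [[0, 1], [2, 3]]  # each worker updates a disjoint slice of indices
threads = [threading.Thread(target=worker, args=(server, s, target))
           for s in shards]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

In a real multi-GPU setting the workers would compute gradients on separate devices and push them without global synchronization; here the lock stands in for whatever consistency the parameter store provides.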


Bibliographic reference.  Chan, William / Lane, Ian (2014): "Distributed asynchronous optimization of convolutional neural networks", In INTERSPEECH-2014, 1073-1077.