16th Annual Conference of the International Speech Communication Association

Dresden, Germany
September 6-10, 2015

Efficient GPU Implementation of Convolutional Neural Networks for Speech Recognition

Ewout van den Berg (1), Daniel Brand (2), Rajesh Bordawekar (2), Leonid Rachevsky (1), Bhuvana Ramabhadran (1)

(1) IBM Watson, USA
(2) IBM T.J. Watson Research Center, USA

Deep learning has enjoyed tremendous success in speech recognition in recent years. Despite the widespread use of deep networks, training large and complex architectures remains very time consuming. A prime example is the convolutional neural network (CNN), which has provided state-of-the-art results but is also among the most computationally intensive networks to train. In this paper, we study four different methods for GPU acceleration of CNNs: a native implementation using cuBLAS, two implementations based on NVIDIA's recently released deep-learning cuDNN library, and an implementation based on cuFFT. We analyze the performance of each of these approaches on the forward operation, the gradient computation, and the backward propagation. The overall best performance is obtained with the custom native implementation, which was found to be up to 6.9 times faster than cuDNN. The paper concludes with results on the end-to-end training speed of our CNN network on an LVCSR task.
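The two lowering strategies behind the cuBLAS- and cuFFT-based approaches can be illustrated in a small, CPU-only sketch (NumPy standing in for the GPU libraries; function names and shapes are illustrative, not from the paper). The first routine unrolls receptive fields into columns so the convolution becomes a single matrix multiply, which is how a GEMM library such as cuBLAS is typically applied; the second computes the same valid cross-correlation in the frequency domain, the strategy a cuFFT-based implementation exploits.

```python
import numpy as np

def conv2d_im2col(x, w):
    """Valid 2-D cross-correlation via im2col + one matrix multiply
    (the lowering used with a GEMM library such as cuBLAS).
    x: (H, W) input plane, w: (kh, kw) filter."""
    H, W = x.shape
    kh, kw = w.shape
    oh, ow = H - kh + 1, W - kw + 1
    # Build the im2col matrix: one column per output position.
    cols = np.empty((kh * kw, oh * ow))
    idx = 0
    for i in range(oh):
        for j in range(ow):
            cols[:, idx] = x[i:i + kh, j:j + kw].ravel()
            idx += 1
    # Single GEMM: (1, kh*kw) @ (kh*kw, oh*ow) -> flattened output map.
    return (w.ravel() @ cols).reshape(oh, ow)

def conv2d_fft(x, w):
    """The same valid cross-correlation computed in the frequency
    domain (the strategy behind an FFT/cuFFT-based implementation)."""
    H, W = x.shape
    kh, kw = w.shape
    # Cross-correlation = convolution with a flipped kernel.
    wf = np.flip(w)
    X = np.fft.rfft2(x, s=(H, W))
    K = np.fft.rfft2(wf, s=(H, W))
    full = np.fft.irfft2(X * K, s=(H, W))
    # The circular-convolution result is wrap-free from index (kh-1, kw-1)
    # on, and that region is exactly the valid output.
    return full[kh - 1:H, kw - 1:W]
```

The trade-off the paper measures shows up even in this sketch: the GEMM route does redundant data movement (each input pixel is copied into up to kh*kw columns) but maps onto one highly tuned kernel, while the FFT route amortizes cost over large filters at the price of transforms and padding.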


Bibliographic reference.  Berg, Ewout van den / Brand, Daniel / Bordawekar, Rajesh / Rachevsky, Leonid / Ramabhadran, Bhuvana (2015): "Efficient GPU implementation of convolutional neural networks for speech recognition", In INTERSPEECH-2015, 1483-1487.