16th Annual Conference of the International Speech Communication Association

Dresden, Germany
September 6-10, 2015

Applying GPGPU to Recurrent Neural Network Language Model Based Fast Network Search in the Real-Time LVCSR

Kyungmin Lee, Chiyoun Park, Ilhwan Kim, Namhoon Kim, Jaewon Lee

Samsung Electronics, Korea

Recurrent Neural Network Language Models (RNNLMs) have started to be used in various areas of speech recognition due to their outstanding performance. However, the high computational complexity of RNNLMs has been a hurdle to applying them in real-time Large Vocabulary Continuous Speech Recognition (LVCSR). In order to accelerate RNNLM-based network searches during decoding, we apply General-Purpose Graphics Processing Units (GPGPUs). This paper proposes a novel method of applying GPGPUs to RNNLM-based graph traversals. We achieve this goal by reducing redundant computations on CPUs and the amount of data transferred between GPGPUs and CPUs. The proposed approach was evaluated on both the WSJ corpus and in-house data. Experiments show that the proposed approach achieves real-time speed in various circumstances while keeping the Word Error Rate (WER) relatively 10% lower than that of n-gram models.
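The abstract's core idea of exploiting GPGPU throughput during decoding can be illustrated with a minimal sketch. The code below is purely hypothetical (not the authors' implementation, and NumPy on CPU stands in for a GPU kernel): it batches the RNNLM score requests of all active decoder hypotheses into a single matrix multiply, which is the kind of operation a GPGPU executes efficiently and which avoids many small per-hypothesis CPU-GPU transfers. All names and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, HIDDEN = 100, 32

# Toy RNNLM parameters (random here; a real model would be trained).
W_in = rng.standard_normal((VOCAB, HIDDEN)) * 0.1   # word embeddings
W_hh = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1  # recurrent weights
W_out = rng.standard_normal((HIDDEN, VOCAB)) * 0.1  # output projection

def step_batch(prev_hidden, word_ids):
    """Advance a batch of decoder hypotheses one word in a single call.

    prev_hidden: (B, HIDDEN) hidden states; word_ids: (B,) input words.
    Returns new hidden states and per-hypothesis log-probabilities over
    the vocabulary, computed with one batched matmul instead of B
    separate ones -- the shape of work a GPGPU kernel would perform.
    """
    h = np.tanh(W_in[word_ids] + prev_hidden @ W_hh)      # (B, HIDDEN)
    logits = h @ W_out                                    # (B, VOCAB)
    logits -= logits.max(axis=1, keepdims=True)           # stable softmax
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return h, logp

# Score 8 active hypotheses in one batched step.
B = 8
h0 = np.zeros((B, HIDDEN))
words = rng.integers(0, VOCAB, size=B)
h1, logp = step_batch(h0, words)
```

In a real decoder the batch would be assembled on the CPU from the active search hypotheses and shipped to the GPU once per frame, so the per-word transfer overhead that the paper targets is amortized over the whole batch.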


Bibliographic reference.  Lee, Kyungmin / Park, Chiyoun / Kim, Ilhwan / Kim, Namhoon / Lee, Jaewon (2015): "Applying GPGPU to recurrent neural network language model based fast network search in the real-time LVCSR", In INTERSPEECH-2015, 2102-2106.