INTERSPEECH 2015
16th Annual Conference of the International Speech Communication Association

Dresden, Germany
September 6-10, 2015

A Multi-Region Deep Neural Network Model in Speech Recognition

Jia Cui, George Saon, Bhuvana Ramabhadran, Brian Kingsbury

IBM T.J. Watson Research Center, USA

This work proposes a new architecture for deep neural network training. Instead of a single cascade of fully connected hidden layers between the input features and the target output, the new architecture organizes hidden layers into several regions, each with its own target. During training, regions communicate through connections among intermediate hidden layers, sharing the internal representations learned from their respective targets; the regions need not share the same input features. This paper presents the performance of acoustic models built with this architecture using speaker-independent and speaker-dependent features. Experimental results are compared not only with a baseline DNN model but also with ensemble DNN, unfolded RNN, and stacked DNN models. Experiments on the IARPA-sponsored Babel tasks demonstrate absolute WER reductions ranging from 0.8% to 2.7%.
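The forward pass implied by this description can be sketched as follows. This is a minimal numpy illustration, not the authors' implementation: the layer counts, ReLU activations, and the use of concatenation as the cross-region connection are assumptions for the sketch, since the abstract does not specify them.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Hypothetical dimensions: two regions, each with its own input
# stream and its own output target.
d_in_a, d_in_b, d_hid, d_out = 40, 40, 64, 10

# Each region has its own stack of hidden layers.
W1_a = rng.standard_normal((d_in_a, d_hid)) * 0.1
W1_b = rng.standard_normal((d_in_b, d_hid)) * 0.1
# Cross-region connections: an intermediate layer in each region
# also receives the other region's intermediate activations.
W2_a = rng.standard_normal((2 * d_hid, d_hid)) * 0.1
W2_b = rng.standard_normal((2 * d_hid, d_hid)) * 0.1
# Region-specific output layers (each region has its own target).
Wout_a = rng.standard_normal((d_hid, d_out)) * 0.1
Wout_b = rng.standard_normal((d_hid, d_out)) * 0.1

def forward(x_a, x_b):
    # First hidden layer per region; inputs need not be shared.
    h1_a = relu(x_a @ W1_a)
    h1_b = relu(x_b @ W1_b)
    # Regions exchange intermediate representations here.
    h2_a = relu(np.concatenate([h1_a, h1_b], axis=-1) @ W2_a)
    h2_b = relu(np.concatenate([h1_b, h1_a], axis=-1) @ W2_b)
    # Each region predicts its own target from its own top layer;
    # training would apply a separate loss to each output.
    return h2_a @ Wout_a, h2_b @ Wout_b

y_a, y_b = forward(rng.standard_normal((8, d_in_a)),
                   rng.standard_normal((8, d_in_b)))
```

Training both region losses jointly lets gradients from each target flow through the shared cross-connections, which is how the regions influence one another's representations.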


Bibliographic reference. Cui, Jia / Saon, George / Ramabhadran, Bhuvana / Kingsbury, Brian (2015): "A multi-region deep neural network model in speech recognition", in INTERSPEECH-2015, 3244-3248.