INTERSPEECH 2015
16th Annual Conference of the International Speech Communication Association

Dresden, Germany
September 6-10, 2015

A Diversity-Penalizing Ensemble Training Method for Deep Learning

Xiaohui Zhang, Daniel Povey, Sanjeev Khudanpur

Johns Hopkins University, USA

A common way to improve the performance of deep learning systems is to train an ensemble of neural networks and combine their outputs during decoding. However, this is computationally expensive at test time. In this paper, we propose a diversity-penalizing ensemble training (DPET) procedure, which trains an ensemble of DNNs whose parameters are differently initialized, and penalizes differences between each individual DNN's output and their average output. In this way each model learns to emulate the average of the whole ensemble, so at test time we can use a single, arbitrarily chosen member of the ensemble. Experimental results on a variety of speech recognition tasks show that this technique is effective: it gives most of the WER improvement of the ensemble method while being no more expensive at test time than a single model.
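The objective sketched in the abstract can be illustrated with a minimal NumPy example. This is a hypothetical reconstruction, not the paper's implementation: the abstract does not specify the form of the penalty or its weight, so the squared-error penalty and the weight `alpha` below are assumptions.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def dpet_loss(logits_per_model, labels, alpha=0.5):
    """Sketch of a DPET-style objective: average cross-entropy over ensemble
    members, plus a penalty that pulls each member's output distribution
    toward the ensemble-average output.

    logits_per_model: shape (K, N, C) -- K models, N frames, C classes
    labels:           int array of shape (N,)
    alpha:            assumed penalty weight (not given in the abstract)
    """
    probs = softmax(logits_per_model)           # (K, N, C)
    avg = probs.mean(axis=0, keepdims=True)     # (1, N, C) ensemble average
    n = labels.shape[0]
    # mean cross-entropy of each member against the reference labels
    ce = -np.log(probs[:, np.arange(n), labels] + 1e-12).mean()
    # penalize each member's deviation from the average output
    penalty = ((probs - avg) ** 2).sum(axis=-1).mean()
    return ce + alpha * penalty

# Toy usage: 3 differently initialized "models" emitting random logits.
rng = np.random.default_rng(0)
K, N, C = 3, 8, 5
logits = rng.normal(size=(K, N, C))
labels = rng.integers(0, C, size=N)
loss = dpet_loss(logits, labels)
```

Note that when all members produce identical outputs the penalty term vanishes, so minimizing this objective drives the members toward agreement, which is what lets any single member stand in for the full ensemble at test time.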

Full Paper

Bibliographic reference.  Zhang, Xiaohui / Povey, Daniel / Khudanpur, Sanjeev (2015): "A diversity-penalizing ensemble training method for deep learning", In INTERSPEECH-2015, 3590-3594.