INTERSPEECH 2015
16th Annual Conference of the International Speech Communication Association

Dresden, Germany
September 6-10, 2015

Autoencoder Based Multi-Stream Combination for Noise Robust Speech Recognition

Sri Harish Mallidi (1), Tetsuji Ogawa (2), Karel Veselý (3), Phani S. Nidadavolu (1), Hynek Hermansky (1)

(1) Johns Hopkins University, USA
(2) Waseda University, Japan
(3) Brno University of Technology, Czech Republic

The performance of automatic speech recognition (ASR) systems degrades rapidly when there is a mismatch between training and test acoustic conditions. Performance can be improved using a multi-stream framework, in which posterior probabilities from several classifiers (often deep neural networks, DNNs) trained on different features/streams are combined. Knowledge of each classifier's confidence on a noisy test utterance can help in devising better posterior combination techniques than the simple sum and product rules [1]. In this work, we propose to use autoencoders, which are multi-layer feed-forward neural networks, to estimate this confidence measure. During the training phase, an autoencoder is trained for each stream on TANDEM features extracted from the corresponding DNN. During the testing phase, we show that the reconstruction error of the autoencoder is correlated with the robustness of the corresponding stream. These error estimates are then used as confidence measures to combine the posterior probabilities generated by the individual streams. Experiments on the Aurora4 and BABEL databases show significant improvements, especially when there is a mismatch between training and test acoustic conditions.
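To make the combination step concrete, below is a minimal numpy sketch of the idea described in the abstract: each stream's reconstruction error is turned into a combination weight, and the per-stream posteriors are averaged with those weights. The autoencoder interface, the inverse-error softmax mapping, and the temperature parameter are illustrative assumptions; the abstract does not specify the exact confidence-to-weight mapping used in the paper.

import numpy as np

def reconstruction_error(autoencoder, features):
    # Mean squared reconstruction error of one stream's TANDEM features.
    # `autoencoder` is assumed to be any callable mapping a (frames x dims)
    # feature matrix to its reconstruction (hypothetical interface).
    recon = autoencoder(features)
    return np.mean((features - recon) ** 2)

def combine_streams(posteriors, errors, temperature=1.0):
    # Weight per-stream posterior matrices (frames x classes) by a softmax
    # over negative reconstruction errors: lower error -> higher weight.
    # The softmax form and `temperature` are illustrative choices, not
    # the paper's stated rule.
    errors = np.asarray(errors, dtype=float)
    weights = np.exp(-errors / temperature)
    weights /= weights.sum()
    combined = sum(w * p for w, p in zip(weights, posteriors))
    # Renormalize per frame to guard against numerical drift.
    return combined / combined.sum(axis=1, keepdims=True)

A weighted sum of this form reduces to the simple sum rule when all streams receive equal weight; a product-rule variant could instead apply the weights to log-posteriors, which the abstract leaves to the full paper.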

Bibliographic reference: Mallidi, Sri Harish / Ogawa, Tetsuji / Veselý, Karel / Nidadavolu, Phani S. / Hermansky, Hynek (2015): "Autoencoder based multi-stream combination for noise robust speech recognition", in INTERSPEECH-2015, 3551–3555.