INTERSPEECH 2014
15th Annual Conference of the International Speech Communication Association

Singapore
September 14-18, 2014

Robust Speech Recognition with Speech Enhanced Deep Neural Networks

Jun Du (1), Qing Wang (1), Tian Gao (1), Yong Xu (1), Li-Rong Dai (1), Chin-Hui Lee (2)

(1) USTC, China
(2) Georgia Institute of Technology, USA

We propose a signal pre-processing front-end that enhances speech based on deep neural networks (DNNs) and uses the enhanced speech features directly to train hidden Markov models (HMMs) for robust speech recognition. As a comprehensive study, we examine its effectiveness for different acoustic features, acoustic models, and training-testing combinations. Tested on the Aurora4 task, the experimental results indicate that our proposed framework consistently outperforms state-of-the-art speech recognition systems in all evaluation conditions. To the best of our knowledge, this is the first showcase on the Aurora4 task to yield performance gains using only an enhancement pre-processor, without any adaptation or compensation post-processing, on top of the best DNN-HMM system. The word error rate reduction from the baseline system is up to 50% for clean-condition training and 15% for multi-condition training. We believe the system performance could be improved further by incorporating post-processing techniques that work coherently with the proposed enhancement pre-processing scheme.
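The abstract does not specify the enhancement network's architecture or input features. As a hedged illustration only, the sketch below assumes a feed-forward DNN regression model that maps a context window of noisy log-power spectral frames to an estimate of the corresponding clean frame, trained with a mean-squared-error loss on parallel noisy/clean data; this is the general recipe for DNN-based enhancement front-ends of this kind. The layer sizes, context width, and feature dimensionality are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of a DNN regression front-end for speech enhancement
# (illustrative only; architecture, sizes, and features are assumptions,
# not the configuration used in the paper).
import torch
import torch.nn as nn

FEAT_DIM = 257   # assumed: log-power spectrum dimension per frame
CONTEXT = 7      # assumed: frames in the input context window
HIDDEN = 2048    # assumed hidden-layer width


class EnhancementDNN(nn.Module):
    """Maps a window of noisy frames to an estimate of the clean center frame."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(FEAT_DIM * CONTEXT, HIDDEN), nn.Sigmoid(),
            nn.Linear(HIDDEN, HIDDEN), nn.Sigmoid(),
            nn.Linear(HIDDEN, FEAT_DIM),  # linear output for regression
        )

    def forward(self, noisy_window):
        return self.net(noisy_window)


def train_step(model, optimizer, noisy_window, clean_frame):
    """One mean-squared-error update on a batch of (noisy, clean) frame pairs."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(noisy_window), clean_frame)
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = EnhancementDNN()
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    # Random stand-ins for parallel noisy/clean training data.
    noisy = torch.randn(32, FEAT_DIM * CONTEXT)
    clean = torch.randn(32, FEAT_DIM)
    print(train_step(model, opt, noisy, clean))
```

In the framework the abstract describes, the enhanced frames produced by such a front-end would then replace the noisy features when training and decoding the back-end GMM-HMM or DNN-HMM recognizer.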


Bibliographic reference.  Du, Jun / Wang, Qing / Gao, Tian / Xu, Yong / Dai, Li-Rong / Lee, Chin-Hui (2014): "Robust speech recognition with speech enhanced deep neural networks", In INTERSPEECH-2014, 616-620.