15th Annual Conference of the International Speech Communication Association

September 14-18, 2014

Robust Speech Recognition Using Long Short-Term Memory Recurrent Neural Networks for Hybrid Acoustic Modelling

Jürgen T. Geiger, Zixing Zhang, Felix Weninger, Björn Schuller, Gerhard Rigoll

Technische Universität München, Germany

One method to achieve robust speech recognition in adverse conditions, including noise and reverberation, is to employ acoustic modelling techniques involving neural networks. Long short-term memory (LSTM) recurrent neural networks have proved effective for this task when used for phoneme prediction in a multi-stream GMM-HMM framework. These networks exploit a self-learnt amount of temporal context, which makes them especially suited to noisy speech recognition. One shortcoming of this approach is that the multi-stream framework still requires a GMM acoustic model. Furthermore, predicting phonemes sacrifices modelling power of the network compared to the classical hybrid setup, where the network predicts HMM states. In this work, we propose using LSTM networks in a hybrid HMM setup in order to overcome these drawbacks. Experiments are performed on the medium-vocabulary recognition track of the 2nd CHiME challenge, which contains speech utterances recorded in a reverberant and noisy environment. A comparison of different network topologies for phoneme or state prediction, used in either the hybrid or the double-stream setup, shows that state prediction networks outperform phoneme prediction networks, leading to state-of-the-art results on this database.
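The "self-learnt amount of temporal context" mentioned in the abstract comes from the LSTM's gating mechanism: a forget gate decides how much of the past cell state to retain at each frame, so the effective context window is learnt rather than fixed. The following is a minimal pure-Python sketch of a single scalar LSTM cell stepped over a short feature sequence; all weight values and variable names are illustrative placeholders, not parameters from the paper.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    # Standard LSTM cell equations (scalar toy version). The gates decide
    # how much past context to keep -- the "self-learnt amount of temporal
    # context" that makes LSTMs attractive for noisy speech recognition.
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])    # input gate
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])    # forget gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])    # output gate
    g = math.tanh(w["wc"] * x + w["uc"] * h_prev + w["bc"])  # candidate
    c = f * c_prev + i * g       # cell state: gated memory update
    h = o * math.tanh(c)         # hidden state / output
    return h, c

# Toy weights set to an arbitrary constant; in practice these are learnt
# by backpropagation through time on the acoustic training data.
weights = {k: 0.5 for k in
           ["wi", "ui", "bi", "wf", "uf", "bf",
            "wo", "uo", "bo", "wc", "uc", "bc"]}

h, c = 0.0, 0.0
for x in [0.1, 0.2, 0.3]:  # stand-in for a sequence of acoustic features
    h, c = lstm_step(x, h, c, weights)
```

In the hybrid setup the paper proposes, the network's output layer would assign posterior probabilities to HMM states rather than phonemes, and those posteriors (divided by state priors) replace the GMM likelihoods during decoding.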


Bibliographic reference.  Geiger, Jürgen T. / Zhang, Zixing / Weninger, Felix / Schuller, Björn / Rigoll, Gerhard (2014): "Robust speech recognition using long short-term memory recurrent neural networks for hybrid acoustic modelling", In INTERSPEECH-2014, 631-635.