Residual LSTM: Design of a Deep Recurrent Architecture for Distant Speech Recognition

Jaeyoung Kim, Mostafa El-Khamy, Jungwon Lee


In this paper, a novel deep recurrent neural network architecture, residual LSTM, is introduced. A plain LSTM has an internal memory cell that can learn long-term dependencies of sequential data. It also provides a temporal shortcut path to avoid vanishing or exploding gradients in the temporal domain. The residual LSTM provides an additional spatial shortcut path from lower layers for efficient training of deep networks with multiple LSTM layers. Compared with previous work, highway LSTM, residual LSTM separates the spatial shortcut path from the temporal one by using output layers, which helps avoid a conflict between spatial- and temporal-domain gradient flows. Furthermore, residual LSTM reuses the output projection matrix and the output gate of the LSTM to control the spatial information flow instead of adding gate networks, which reduces the number of network parameters by more than 10%. An experiment on distant speech recognition with the AMI SDM corpus shows that 10-layer plain and highway LSTM networks suffered 13.7% and 6.2% increases in WER over their 3-layer baselines, respectively. In contrast, the 10-layer residual LSTM network achieved the lowest WER of 41.0%, corresponding to 3.3% and 2.8% WER reductions over the plain and highway LSTM networks, respectively.
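The key idea in the abstract, a spatial shortcut that reuses the existing output gate and output projection matrix rather than adding a new gate network, can be sketched for a single time step as below. This is a minimal illustration in NumPy, not the paper's exact formulation: the gate layout, weight shapes, and the function name `residual_lstm_step` are assumptions for the sake of a self-contained example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def residual_lstm_step(x, h_prev, c_prev, W, b, W_proj):
    """One time step of a residual LSTM layer (illustrative sketch).

    x:      input from the lower layer, shape (n,)
    h_prev: previous hidden state of this layer, shape (n,)
    c_prev: previous memory cell, shape (n,)
    W:      stacked gate weights, shape (4n, 2n), rows ordered [i, f, o, g]
    b:      stacked gate biases, shape (4n,)
    W_proj: output projection matrix, shape (n, n)
    """
    n = x.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    i = sigmoid(z[:n])            # input gate
    f = sigmoid(z[n:2 * n])       # forget gate
    o = sigmoid(z[2 * n:3 * n])   # output gate
    g = np.tanh(z[3 * n:])        # candidate cell update
    c = f * c_prev + i * g        # temporal path: plain LSTM memory cell
    # Spatial shortcut: the projected cell output is summed with the
    # lower-layer input x, and the existing output gate modulates the
    # sum -- no additional gate network is introduced.
    h = o * (W_proj @ np.tanh(c) + x)
    return h, c
```

Note that when `W_proj` is zero the layer output reduces to the output-gated input `o * x`, which makes the shortcut behavior easy to verify in isolation.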


DOI: 10.21437/Interspeech.2017-477

Cite as: Kim, J., El-Khamy, M., Lee, J. (2017) Residual LSTM: Design of a Deep Recurrent Architecture for Distant Speech Recognition. Proc. Interspeech 2017, 1591-1595, DOI: 10.21437/Interspeech.2017-477.


@inproceedings{Kim2017,
  author={Jaeyoung Kim and Mostafa El-Khamy and Jungwon Lee},
  title={Residual LSTM: Design of a Deep Recurrent Architecture for Distant Speech Recognition},
  year={2017},
  booktitle={Proc. Interspeech 2017},
  pages={1591--1595},
  doi={10.21437/Interspeech.2017-477},
  url={http://dx.doi.org/10.21437/Interspeech.2017-477}
}