Gated Recurrent Unit Based Acoustic Modeling with Future Context

Jie Li, Xiaorui Wang, Yuanyuan Zhao, Yan Li

The use of future contextual information has typically been shown to be helpful for acoustic modeling. However, it is not easy for a recurrent neural network (RNN) to model future temporal context effectively while keeping model latency low. In this paper, we attempt to design an RNN acoustic model that can utilize future context effectively and directly, with model latency and computation cost kept as low as possible. The proposed model is based on the minimal gated recurrent unit (mGRU), with an input projection layer inserted into it. Two context modules, temporal encoding and temporal convolution, are specifically designed for this architecture to model the future context. Experimental results on the Switchboard task and an internal Mandarin ASR task show that the proposed model performs much better than long short-term memory (LSTM) and mGRU models, while enabling online decoding with a maximum latency of 170 ms. The model even outperforms a very strong baseline, TDNN-LSTM, with smaller model latency and almost half as many parameters.
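To make the architecture in the abstract concrete, below is a minimal NumPy sketch of an mGRU cell (a GRU with only an update gate and no reset gate) with a linear input projection layer inserted before the recurrent computation. This is an illustrative reconstruction based on the abstract's description, not the authors' exact model; all names, dimensions, and initialization choices are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MGRUWithInputProjection:
    """Hedged sketch: minimal GRU (update gate only) with an input
    projection layer, loosely following the abstract's description."""

    def __init__(self, input_dim, proj_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        s = 0.1  # illustrative initialization scale
        self.P  = rng.normal(0, s, (proj_dim, input_dim))    # input projection
        self.Wz = rng.normal(0, s, (hidden_dim, proj_dim))   # update gate, input weights
        self.Uz = rng.normal(0, s, (hidden_dim, hidden_dim)) # update gate, recurrent weights
        self.Wh = rng.normal(0, s, (hidden_dim, proj_dim))   # candidate state, input weights
        self.Uh = rng.normal(0, s, (hidden_dim, hidden_dim)) # candidate state, recurrent weights
        self.hidden_dim = hidden_dim

    def forward(self, xs):
        """Run the cell over a sequence xs of shape (T, input_dim)."""
        h = np.zeros(self.hidden_dim)
        outputs = []
        for x in xs:
            p = self.P @ x                                # project the input frame
            z = sigmoid(self.Wz @ p + self.Uz @ h)        # update gate
            h_tilde = np.tanh(self.Wh @ p + self.Uh @ h)  # candidate hidden state
            h = z * h + (1.0 - z) * h_tilde               # interpolate old and new state
            outputs.append(h)
        return np.stack(outputs)
```

In this sketch, future context could be supplied by splicing or convolving a few look-ahead frames into each input `x` before projection; the bounded look-ahead is what would keep online-decoding latency limited, in the spirit of the 170 ms figure reported in the abstract.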

DOI: 10.21437/Interspeech.2018-1544

Cite as: Li, J., Wang, X., Zhao, Y., Li, Y. (2018) Gated Recurrent Unit Based Acoustic Modeling with Future Context. Proc. Interspeech 2018, 1788-1792, DOI: 10.21437/Interspeech.2018-1544.

@inproceedings{li2018gated,
  author={Jie Li and Xiaorui Wang and Yuanyuan Zhao and Yan Li},
  title={Gated Recurrent Unit Based Acoustic Modeling with Future Context},
  booktitle={Proc. Interspeech 2018},
  year={2018},
  pages={1788--1792},
  doi={10.21437/Interspeech.2018-1544}
}