Endpoint Detection Using Grid Long Short-Term Memory Networks for Streaming Speech Recognition

Shuo-Yiin Chang, Bo Li, Tara N. Sainath, Gabor Simko, Carolina Parada


The task of endpointing is to determine when the user has finished speaking. This is important for interactive speech applications such as voice search and Google Home. In this paper, we propose a GLDNN-based (grid long short-term memory deep neural network) endpointer model and show that it provides significant improvements over a state-of-the-art CLDNN (convolutional, long short-term memory, deep neural network) model. Specifically, we replace the convolution layer in the CLDNN with a grid LSTM layer that models both spectral and temporal variations through recurrent connections. Results show that the GLDNN achieves 32% relative improvement in false alarm rate at a fixed false reject rate of 2%, and reduces median latency by 11%. We also include detailed experiments investigating why grid LSTMs offer better performance than convolution layers. Analysis reveals that the recurrent connection along the frequency axis is an important factor that greatly contributes to the performance of grid LSTMs, especially in the presence of background noise. Finally, we also show that multichannel input further increases robustness to background speech. Overall, we achieve 16% (100 ms) endpointer latency improvement relative to our previous best model on a Voice Search Task.
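The grid LSTM idea in the abstract can be illustrated with a heavily simplified sketch: two LSTMs, one recurring over time and one over frequency, each seeing the other's hidden state at every time-frequency grid point. This is an assumed, untrained numpy toy showing the data flow only, not the authors' implementation; the function names, dimensions, and weight initialization are illustrative choices.

```python
import numpy as np

def lstm_step(x, h, c, W):
    """One LSTM cell update. W packs input weights, recurrent weights, biases."""
    z = W["x"] @ x + W["h"] @ h + W["b"]
    i, f, o, g = np.split(z, 4)
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    c = sig(f) * c + sig(i) * np.tanh(g)
    h = sig(o) * np.tanh(c)
    return h, c

def grid_lstm(spec, n=8, seed=0):
    """Toy 2-D grid LSTM over a (time, freq) spectrogram.

    At each grid point (t, f), a time-LSTM and a frequency-LSTM each update
    their own state; each sees the other's hidden state alongside the input
    feature, giving recurrent coupling along both axes (the property the
    abstract credits for robustness). Weights are random: this shows the
    data flow, not a trained endpointer.
    """
    rng = np.random.default_rng(seed)
    T, F = spec.shape
    def make_w(d_in):
        return {"x": 0.1 * rng.standard_normal((4 * n, d_in)),
                "h": 0.1 * rng.standard_normal((4 * n, n)),
                "b": np.zeros(4 * n)}
    Wt, Wf = make_w(1 + n), make_w(1 + n)
    out = np.zeros((T, F, 2 * n))
    h_t = np.zeros((F, n)); c_t = np.zeros((F, n))   # time states, one per freq bin
    for t in range(T):
        h_f = np.zeros(n); c_f = np.zeros(n)          # freq states reset each frame
        for f in range(F):
            xt = np.concatenate(([spec[t, f]], h_f))     # time-LSTM sees freq context
            xf = np.concatenate(([spec[t, f]], h_t[f]))  # freq-LSTM sees time context
            h_t[f], c_t[f] = lstm_step(xt, h_t[f], c_t[f], Wt)
            h_f, c_f = lstm_step(xf, h_f, c_f, Wf)
            out[t, f] = np.concatenate((h_t[f], h_f))
    return out
```

In a GLDNN-style endpointer, the per-frame output of such a layer would replace the convolution layer's feature maps and feed the downstream LSTM and DNN layers that produce the speech/silence posterior.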


DOI: 10.21437/Interspeech.2017-284

Cite as: Chang, S., Li, B., Sainath, T.N., Simko, G., Parada, C. (2017) Endpoint Detection Using Grid Long Short-Term Memory Networks for Streaming Speech Recognition. Proc. Interspeech 2017, 3812-3816, DOI: 10.21437/Interspeech.2017-284.


@inproceedings{Chang2017,
  author={Shuo-Yiin Chang and Bo Li and Tara N. Sainath and Gabor Simko and Carolina Parada},
  title={Endpoint Detection Using Grid Long Short-Term Memory Networks for Streaming Speech Recognition},
  year=2017,
  booktitle={Proc. Interspeech 2017},
  pages={3812--3816},
  doi={10.21437/Interspeech.2017-284},
  url={http://dx.doi.org/10.21437/Interspeech.2017-284}
}