Memory Time Span in LSTMs for Multi-Speaker Source Separation

Jeroen Zegers, Hugo van Hamme


With deep learning approaches becoming state-of-the-art in many speech (as well as non-speech) related machine learning tasks, efforts are being made to open up the neural networks that are often regarded as a black box. In this paper we analyze how recurrent neural networks (RNNs) cope with temporal dependencies by determining the relevant memory time span in a long short-term memory (LSTM) cell. This is done by leaking the state variable with a controlled lifetime and evaluating the task performance. The technique can be applied to any task to estimate the time span the LSTM exploits in that specific scenario. The focus of this paper is the task of separating speakers from overlapping speech. We discern two effects: a long-term effect, probably due to speaker characterization, and a short-term effect, probably exploiting phone-sized formant tracks.
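The leaking mechanism described above can be illustrated with a toy sketch (hypothetical code, not the authors' implementation): multiply the cell state by a constant leak factor at every step, so that with leak < 1 the memory decays exponentially and has an effective lifetime of roughly 1 / (1 - leak) time steps. The function `leaky_lstm_step` and its parameter names are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def leaky_lstm_step(x, h_prev, c_prev, W, b, leak):
    """One LSTM step whose cell state is leaked by the factor `leak`.

    `leak` = 1.0 recovers a standard LSTM; `leak` < 1.0 forces the cell
    state to decay, limiting the memory time span to about 1/(1 - leak)
    steps. (Illustrative sketch; not the paper's exact formulation.)
    """
    n = h_prev.size
    z = W @ np.concatenate([x, h_prev]) + b   # all four gates at once
    i = sigmoid(z[0 * n:1 * n])               # input gate
    f = sigmoid(z[1 * n:2 * n])               # forget gate
    o = sigmoid(z[2 * n:3 * n])               # output gate
    g = np.tanh(z[3 * n:4 * n])               # candidate cell update
    c = leak * (f * c_prev) + i * g           # leaked cell state
    h = o * np.tanh(c)                        # hidden state / output
    return h, c
```

Sweeping `leak` over a grid and measuring task performance at each setting would then indicate how much memory the task actually needs.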


DOI: 10.21437/Interspeech.2018-2082

Cite as: Zegers, J., van Hamme, H. (2018) Memory Time Span in LSTMs for Multi-Speaker Source Separation. Proc. Interspeech 2018, 1477-1481, DOI: 10.21437/Interspeech.2018-2082.


@inproceedings{Zegers2018,
  author={Jeroen Zegers and Hugo {van Hamme}},
  title={Memory Time Span in LSTMs for Multi-Speaker Source Separation},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={1477--1481},
  doi={10.21437/Interspeech.2018-2082},
  url={http://dx.doi.org/10.21437/Interspeech.2018-2082}
}