Effectiveness of Single-Channel BLSTM Enhancement for Language Identification

Peter Sibbern Frederiksen, Jesús Villalba, Shinji Watanabe, Zheng-Hua Tan, Najim Dehak


This paper proposes to apply deep neural network (DNN)-based single-channel speech enhancement (SE) to language identification. The 2017 language recognition evaluation (LRE17) introduced noisy audio from videos, in addition to the telephone conversations from past challenges. Because of that, adapting models from telephone speech to noisy speech from the video domain was required to obtain optimum performance. However, such adaptation requires knowledge of the audio domain and availability of in-domain data. Instead of adaptation, we propose to use a speech enhancement step to clean up the noisy audio as preprocessing for language identification. We used a bi-directional long short-term memory (BLSTM) neural network, which, given noisy log-Mel features, predicts a spectral mask indicating how clean each time-frequency bin is. The noisy spectrogram is multiplied by this predicted mask to obtain the enhanced magnitude spectrogram, which is transformed back into the time domain using the unaltered noisy speech phase. The experiments show significant improvements in language identification of noisy speech, for systems with and without domain adaptation, while preserving identification performance in the telephone audio domain. In the best-adapted state-of-the-art bottleneck i-vector system, the relative improvement is 11.3% for noisy speech.
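The enhancement step described in the abstract (predict a mask, apply it to the noisy magnitude spectrogram, reconstruct with the noisy phase) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `predict_mask` function is a hypothetical stand-in for the trained BLSTM, and the STFT parameters are assumed, not taken from the paper.

```python
import numpy as np

def predict_mask(mag):
    # Stand-in for the trained BLSTM mask estimator. In the paper the
    # mask in [0, 1] is predicted from noisy log-Mel features; here we
    # return an identity mask so the pipeline passes signal through.
    return np.ones_like(mag)

def apply_mask_enhancement(noisy, n_fft=512, hop=256):
    # STFT via framing + FFT with a Hann window (assumed parameters)
    win = np.hanning(n_fft)
    frames = [noisy[i:i + n_fft] * win
              for i in range(0, len(noisy) - n_fft + 1, hop)]
    spec = np.fft.rfft(np.stack(frames), axis=1)       # (T, F) complex
    mag, phase = np.abs(spec), np.angle(spec)

    # Multiply the noisy magnitude by the predicted mask...
    enhanced_mag = mag * predict_mask(mag)
    # ...and reuse the unaltered noisy phase for reconstruction
    enhanced_spec = enhanced_mag * np.exp(1j * phase)

    # Inverse STFT by windowed overlap-add with squared-window normalization
    out = np.zeros(len(noisy))
    norm = np.zeros(len(noisy))
    for t, frame in enumerate(np.fft.irfft(enhanced_spec, n=n_fft, axis=1)):
        out[t * hop:t * hop + n_fft] += frame * win
        norm[t * hop:t * hop + n_fft] += win ** 2
    return out / np.maximum(norm, 1e-8)
```

With the identity mask above, the pipeline reduces to analysis/resynthesis and recovers the input signal away from the edges; swapping in a real mask estimator attenuates the time-frequency bins it judges to be noise-dominated.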

DOI: 10.21437/Interspeech.2018-2458

Cite as: Frederiksen, P.S., Villalba, J., Watanabe, S., Tan, Z., Dehak, N. (2018) Effectiveness of Single-Channel BLSTM Enhancement for Language Identification. Proc. Interspeech 2018, 1823-1827, DOI: 10.21437/Interspeech.2018-2458.


@inproceedings{Frederiksen2018,
  author={Peter Sibbern Frederiksen and Jesús Villalba and Shinji Watanabe and Zheng-Hua Tan and Najim Dehak},
  title={Effectiveness of Single-Channel BLSTM Enhancement for Language Identification},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={1823--1827},
  doi={10.21437/Interspeech.2018-2458},
  url={http://dx.doi.org/10.21437/Interspeech.2018-2458}
}