Dynamic Layer Normalization for Adaptive Neural Acoustic Modeling in Speech Recognition

Taesup Kim, Inchul Song, Yoshua Bengio


Layer normalization is a recently introduced technique for normalizing the activities of neurons in deep neural networks to improve training speed and stability. In this paper, we introduce a new layer normalization technique called Dynamic Layer Normalization (DLN) for adaptive neural acoustic modeling in speech recognition. By dynamically generating the scaling and shifting parameters in layer normalization, DLN adapts neural acoustic models to the acoustic variability arising from various factors such as speakers, channel noise, and environments. Unlike other adaptive acoustic models, our proposed approach requires neither additional adaptation data nor speaker information such as i-vectors. Moreover, the model size stays fixed because the adaptation parameters are generated dynamically. We apply DLN to deep bidirectional LSTM acoustic models and evaluate them on two benchmark datasets for large-vocabulary ASR experiments: WSJ and TED-LIUM release 2. The experimental results show that DLN improves the transcription accuracy of neural acoustic models by dynamically adapting to various speakers and environments.
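The core idea in the abstract can be sketched in a few lines: instead of learning fixed scaling and shifting parameters for layer normalization, generate them per utterance from a summary of the input, so the normalization itself adapts to the speaker and environment. The sketch below is a minimal NumPy illustration under assumptions not stated in the abstract (a mean-over-time utterance summary and linear generator networks `W_a, b_a, W_b, b_b`); the paper's actual summarization and generator architecture may differ.

```python
import numpy as np

def dynamic_layer_norm(h, W_a, b_a, W_b, b_b, eps=1e-5):
    """Illustrative sketch of Dynamic Layer Normalization (DLN).

    Standard layer normalization uses fixed, learned scaling (alpha) and
    shifting (beta) vectors. DLN instead generates alpha and beta on the
    fly from a summary of the current utterance, so the model adapts to
    acoustic variability without extra adaptation data or i-vectors.

    h : (T, D) hidden activations of one utterance (T frames, D units).
    W_a, b_a, W_b, b_b : parameters of the (assumed linear) generator
        networks mapping the utterance summary to alpha and beta.
    """
    # Utterance-level summary; a simple mean over time is assumed here.
    a = h.mean(axis=0)                       # (D,)

    # Dynamically generated scaling and shifting parameters.
    alpha = W_a @ a + b_a                    # (D,)
    beta = W_b @ a + b_b                     # (D,)

    # Standard per-frame layer normalization statistics.
    mu = h.mean(axis=1, keepdims=True)       # (T, 1)
    sigma = h.std(axis=1, keepdims=True)     # (T, 1)

    # Normalize each frame, then apply the generated affine transform.
    return alpha * (h - mu) / (sigma + eps) + beta
```

Because alpha and beta are computed from the input rather than stored per speaker, the number of model parameters does not grow with the number of adaptation targets, which matches the fixed-model-size claim in the abstract.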


DOI: 10.21437/Interspeech.2017-556

Cite as: Kim, T., Song, I., Bengio, Y. (2017) Dynamic Layer Normalization for Adaptive Neural Acoustic Modeling in Speech Recognition. Proc. Interspeech 2017, 2411-2415, DOI: 10.21437/Interspeech.2017-556.


@inproceedings{Kim2017,
  author={Taesup Kim and Inchul Song and Yoshua Bengio},
  title={Dynamic Layer Normalization for Adaptive Neural Acoustic Modeling in Speech Recognition},
  year={2017},
  booktitle={Proc. Interspeech 2017},
  pages={2411--2415},
  doi={10.21437/Interspeech.2017-556},
  url={http://dx.doi.org/10.21437/Interspeech.2017-556}
}