On Multi-Domain Training and Adaptation of End-to-End RNN Acoustic Models for Distant Speech Recognition

Seyedmahdad Mirsamadi, John H.L. Hansen


Recognition of distant (far-field) speech is a challenge for ASR due to mismatch in recording conditions arising from room reverberation and environmental noise. Given the remarkable learning capacity of deep neural networks, there is increasing interest in addressing this problem by using a large corpus of reverberant far-field speech to train robust models. In this study, we explore how an end-to-end RNN acoustic model trained on speech from different rooms and acoustic conditions (different domains) achieves robustness to environmental variations. It is shown that the first hidden layer acts as a domain separator, projecting the data from different domains into different subspaces. The subsequent layers then use this encoded domain knowledge to map these features to final representations that are invariant to domain change. This mechanism is closely related to noise-aware or room-aware approaches which append manually extracted domain signatures to the input features. Additionally, we demonstrate how this understanding of the learning procedure provides useful guidance for model adaptation to new acoustic conditions. We present results on the AMI corpus to demonstrate the propagation of domain information in a deep RNN, and perform recognition experiments which indicate the role of encoded domain knowledge in the training and adaptation of RNN acoustic models.
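The noise-aware / room-aware input augmentation mentioned above can be sketched as follows. This is a generic illustration, not the paper's implementation: the function name, feature dimensions, and the choice of domain signature (e.g. a noise or room estimate vector) are assumptions for the example.

```python
import numpy as np

def append_domain_signature(features, signature):
    """Append an utterance-level domain signature to every frame.

    features:  (T, D) array of frame-level acoustic features
    signature: (S,) utterance-level domain descriptor (hypothetical example)
    returns:   (T, D + S) augmented features
    """
    T = features.shape[0]
    tiled = np.tile(signature, (T, 1))  # repeat the signature for each frame
    return np.concatenate([features, tiled], axis=1)

# Example: 100 frames of 40-dim filterbank features plus a 10-dim signature.
feats = np.random.randn(100, 40)
sig = np.random.randn(10)
aug = append_domain_signature(feats, sig)
print(aug.shape)  # (100, 50)
```

The point of the paper's analysis is that a multi-domain RNN learns an analogous domain encoding in its first hidden layer on its own, without such manually extracted signatures.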


 DOI: 10.21437/Interspeech.2017-398

Cite as: Mirsamadi, S., Hansen, J.H.L. (2017) On Multi-Domain Training and Adaptation of End-to-End RNN Acoustic Models for Distant Speech Recognition. Proc. Interspeech 2017, 404-408, DOI: 10.21437/Interspeech.2017-398.


@inproceedings{Mirsamadi2017,
  author={Seyedmahdad Mirsamadi and John H.L. Hansen},
  title={On Multi-Domain Training and Adaptation of End-to-End RNN Acoustic Models for Distant Speech Recognition},
  year=2017,
  booktitle={Proc. Interspeech 2017},
  pages={404--408},
  doi={10.21437/Interspeech.2017-398},
  url={http://dx.doi.org/10.21437/Interspeech.2017-398}
}