Unsupervised Adaptation with Interpretable Disentangled Representations for Distant Conversational Speech Recognition

Wei-Ning Hsu, Hao Tang, James Glass


The current trend in automatic speech recognition is to leverage large amounts of labeled data to train supervised neural network models. Unfortunately, obtaining data for a wide range of domains to train robust models can be costly. However, it is relatively inexpensive to collect large amounts of unlabeled data from domains that we want the models to generalize to. In this paper, we propose a novel unsupervised adaptation method that learns to synthesize labeled data for the target domain from unlabeled in-domain data and labeled out-of-domain data. We first learn without supervision an interpretable latent representation of speech that encodes linguistic and nuisance factors (e.g., speaker and channel) using different latent variables. To transform a labeled out-of-domain utterance without altering its transcript, we transform the latent nuisance variables while maintaining the linguistic variables. To demonstrate our approach, we focus on a channel mismatch setting, where the domain of interest is distant conversational speech and labels are only available for close-talking speech. Our proposed method is evaluated on the AMI dataset, outperforming all baselines and bridging the gap between unadapted and in-domain models by over 77% without using any parallel data.
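The core idea — swap the nuisance latent variables while holding the linguistic ones fixed, so a transformed utterance keeps its transcript — can be illustrated with a deliberately simplified sketch. Everything below (the function names, the split-a-vector "encoder", and averaging nuisance codes over target-domain data) is an illustrative assumption, not the authors' actual neural model, which learns the disentangled representation without supervision.

```python
# Hypothetical sketch of the nuisance-transformation idea: a latent code is
# split into a linguistic part (kept) and a nuisance part (replaced with an
# estimate from unlabeled target-domain data). The real system uses trained
# neural encoder/decoder networks; these toy functions only mirror the flow.

from typing import List, Tuple


def encode(latent: List[float], n_linguistic: int) -> Tuple[List[float], List[float]]:
    """Toy stand-in for an inference network: split a latent vector into
    linguistic and nuisance components."""
    return latent[:n_linguistic], latent[n_linguistic:]


def decode(linguistic: List[float], nuisance: List[float]) -> List[float]:
    """Toy stand-in for a decoder: recombine the two latent parts."""
    return linguistic + nuisance


def mean_nuisance(target_latents: List[List[float]], n_linguistic: int) -> List[float]:
    """Estimate a target-domain nuisance code by averaging over unlabeled
    target-domain latents (an assumed, simplistic estimator)."""
    nuisances = [encode(z, n_linguistic)[1] for z in target_latents]
    dim = len(nuisances[0])
    return [sum(n[i] for n in nuisances) / len(nuisances) for i in range(dim)]


def adapt(source_latent: List[float],
          target_latents: List[List[float]],
          n_linguistic: int) -> List[float]:
    """Replace the source nuisance code with the target-domain estimate,
    leaving the linguistic code (and hence the transcript) unchanged."""
    linguistic, _ = encode(source_latent, n_linguistic)
    return decode(linguistic, mean_nuisance(target_latents, n_linguistic))
```

For example, adapting a close-talking latent `[1.0, 2.0, 10.0, 10.0]` (first two dimensions linguistic) toward distant-speech latents `[[0.0, 0.0, 4.0, 6.0], [0.0, 0.0, 6.0, 8.0]]` keeps the linguistic dimensions `[1.0, 2.0]` and substitutes the averaged nuisance code — synthesized data of this form can then carry the original transcript as its label.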


DOI: 10.21437/Interspeech.2018-1097

Cite as: Hsu, W., Tang, H., Glass, J. (2018) Unsupervised Adaptation with Interpretable Disentangled Representations for Distant Conversational Speech Recognition. Proc. Interspeech 2018, 1576-1580, DOI: 10.21437/Interspeech.2018-1097.


@inproceedings{Hsu2018,
  author={Wei-Ning Hsu and Hao Tang and James Glass},
  title={Unsupervised Adaptation with Interpretable Disentangled Representations for Distant Conversational Speech Recognition},
  year={2018},
  booktitle={Proc. Interspeech 2018},
  pages={1576--1580},
  doi={10.21437/Interspeech.2018-1097},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1097}
}