Learning to Adapt: A Meta-learning Approach for Speaker Adaptation

Ondřej Klejch, Joachim Fainberg, Peter Bell


The performance of automatic speech recognition systems can be improved by adapting an acoustic model to compensate for the mismatch between training and testing conditions, for example by adapting to unseen speakers. The success of speaker adaptation methods relies on selecting weights that are suitable for adaptation and on using good adaptation schedules to update these weights without overfitting to the adaptation data. In this paper we investigate a principled way of adapting all the weights of the acoustic model using meta-learning. We show that the meta-learner can learn to perform both supervised and unsupervised speaker adaptation, and that it outperforms a strong baseline adapting LHUC (Learning Hidden Unit Contributions) parameters when adapting a DNN acoustic model with 1.5M parameters. We also report initial experiments on adapting TDNN acoustic models, where the meta-learner achieves performance comparable with LHUC.
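The abstract describes a meta-learner that learns how to update an acoustic model's weights on a speaker's adaptation data. A minimal, hypothetical sketch of this general idea (not the authors' implementation, which operates on real acoustic models) is a first-order MAML-style loop on a toy linear model, where the per-parameter step sizes for a single inner adaptation step are themselves meta-learned; all names, the toy "speakers", and the model are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of "learning to adapt" on a toy linear regression model
# (NOT the paper's method): per-parameter step sizes `alpha` for one inner
# adaptation step are meta-learned, first-order MAML style, so that a single
# gradient step on a speaker's adaptation data improves held-out loss.

rng = np.random.default_rng(0)
d = 3
w_true = rng.normal(size=d)          # "canonical" model weights

def make_speaker():
    """Each toy 'speaker' slightly shifts the canonical weights."""
    return w_true + 0.1 * rng.normal(size=d)

def batch(w_s, n=16):
    """Draw n examples generated by a speaker with weights w_s."""
    x = rng.normal(size=(n, d))
    return x, x @ w_s

def loss(w, x, y):
    return float(np.mean((x @ w - y) ** 2))

def grad(w, x, y):
    return 2.0 * x.T @ (x @ w - y) / len(y)

def adapt(w, alpha, x, y):
    """Inner loop: one gradient step with learned per-parameter step sizes."""
    return w - alpha * grad(w, x, y)

# Outer (meta) loop: train w and alpha so that adaptation on one batch
# improves the loss on a held-out batch from the same speaker.
w = np.zeros(d)
alpha = np.full(d, 0.1)
meta_lr = 0.05
for _ in range(300):
    w_s = make_speaker()
    xa, ya = batch(w_s)              # adaptation data for this speaker
    xt, yt = batch(w_s)              # held-out data for the same speaker
    ga = grad(w, xa, ya)
    gt = grad(w - alpha * ga, xt, yt)
    w -= meta_lr * gt                # first-order (FOMAML-style) update of w
    alpha += meta_lr * ga * gt       # since d(adapted w)/d(alpha) = -ga
    alpha = np.clip(alpha, 0.0, 0.25)

# Evaluate: does one learned adaptation step help on unseen speakers?
befores, afters = [], []
for _ in range(50):
    w_s = make_speaker()
    xa, ya = batch(w_s)
    xt, yt = batch(w_s)
    befores.append(loss(w, xt, yt))
    afters.append(loss(adapt(w, alpha, xa, ya), xt, yt))
```

On this toy problem the held-out loss after the learned adaptation step is lower on average than before it, mirroring the abstract's claim that the meta-learner discovers an effective adaptation schedule rather than relying on a hand-tuned one.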


DOI: 10.21437/Interspeech.2018-1244

Cite as: Klejch, O., Fainberg, J., Bell, P. (2018) Learning to Adapt: A Meta-learning Approach for Speaker Adaptation. Proc. Interspeech 2018, 867-871, DOI: 10.21437/Interspeech.2018-1244.


@inproceedings{Klejch2018,
  author={Ondřej Klejch and Joachim Fainberg and Peter Bell},
  title={Learning to Adapt: A Meta-learning Approach for Speaker Adaptation},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={867--871},
  doi={10.21437/Interspeech.2018-1244},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1244}
}