Embedding-Based Speaker Adaptive Training of Deep Neural Networks

Xiaodong Cui, Vaibhava Goel, George Saon


An embedding-based speaker adaptive training (SAT) approach is proposed and investigated in this paper for deep neural network acoustic modeling. In this approach, speaker embedding vectors, which are constant for a given speaker, are mapped through a control network to layer-dependent element-wise affine transformations that canonicalize the internal feature representations at the outputs of hidden layers of a main network. The control network that generates the speaker-dependent mappings is jointly estimated with the main network for overall speaker adaptive acoustic modeling. Experiments on large vocabulary continuous speech recognition (LVCSR) tasks show that the proposed SAT scheme can yield superior performance over the widely-used speaker-aware training using i-vectors with speaker-adapted input features.
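The core mechanism described above can be sketched as follows: a small control network maps a fixed speaker embedding to per-layer element-wise scale and shift parameters, which are then applied to the hidden activations of the main network. This is a minimal NumPy illustration, not the paper's implementation; all dimensions, initializations, and function names (`speaker_affine`, `adapt_hidden`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

embed_dim, hidden_dim, n_layers = 100, 512, 3

# Control network (assumed form): one small linear map per hidden layer,
# emitting 2 * hidden_dim values (element-wise scale and shift) from the
# speaker embedding. In the paper this is trained jointly with the main net.
ctrl_W = [rng.standard_normal((2 * hidden_dim, embed_dim)) * 0.01
          for _ in range(n_layers)]
ctrl_b = [np.zeros(2 * hidden_dim) for _ in range(n_layers)]

def speaker_affine(e_s, layer):
    """Map a speaker embedding to element-wise (scale, shift) for one layer."""
    out = ctrl_W[layer] @ e_s + ctrl_b[layer]
    gamma, beta = out[:hidden_dim], out[hidden_dim:]
    # Center the scale around 1 so an untrained control net is near-identity.
    return 1.0 + gamma, beta

def adapt_hidden(h, e_s, layer):
    """Canonicalize a hidden-layer activation with the speaker-dependent affine."""
    gamma, beta = speaker_affine(e_s, layer)
    return gamma * h + beta

e_s = rng.standard_normal(embed_dim)   # embedding: constant per speaker
h = rng.standard_normal(hidden_dim)    # activation at one hidden layer
h_adapted = adapt_hidden(h, e_s, layer=0)
print(h_adapted.shape)
```

Because the transformation is element-wise affine, it adds only a small per-layer parameter cost to the control network while letting every hidden unit be rescaled and shifted per speaker.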


 DOI: 10.21437/Interspeech.2017-460

Cite as: Cui, X., Goel, V., Saon, G. (2017) Embedding-Based Speaker Adaptive Training of Deep Neural Networks. Proc. Interspeech 2017, 122-126, DOI: 10.21437/Interspeech.2017-460.


@inproceedings{Cui2017,
  author={Xiaodong Cui and Vaibhava Goel and George Saon},
  title={Embedding-Based Speaker Adaptive Training of Deep Neural Networks},
  year=2017,
  booktitle={Proc. Interspeech 2017},
  pages={122--126},
  doi={10.21437/Interspeech.2017-460},
  url={http://dx.doi.org/10.21437/Interspeech.2017-460}
}