Neural VTLN for Speaker Adaptation in TTS

Bastian Schnell, Philip N. Garner


Vocal tract length normalisation (VTLN) is well established as a speaker adaptation technique that can work with very little adaptation data. It is also well known that VTLN can be cast as a linear transform in the cepstral domain. Building on this latter property, we show that it can be cast as a (linear) layer in a deep neural network (DNN) for speech synthesis. We show that VTLN parameters can then be trained in the same framework as the rest of the DNN using automatic gradients. Experimental results show that the DNN is capable of predicting phone-dependent warpings on artificial data, and that such warpings improve the quality of an acoustic model on real data in subjective listening tests.
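To illustrate the property the abstract builds on, the sketch below constructs the cepstral-domain VTLN transform as an explicit matrix. It is not the authors' implementation; it assumes the standard all-pass (bilinear) frequency warping, whose cepstral recursion (after Oppenheim, as used e.g. in SPTK's `freqt`) is linear in the input cepstrum, so its responses to unit vectors give the matrix of a linear layer. The function names `freqt` and `vtln_matrix` are illustrative choices.

```python
import numpy as np

def freqt(c, order, alpha):
    """Warp cepstrum c to a cepstrum of the given order under the all-pass
    (bilinear) frequency transform with warping factor alpha.
    Every operation below is linear in c, so the map c -> freqt(c) is linear."""
    b = 1.0 - alpha * alpha
    g = np.zeros(order + 1)
    for i in range(len(c) - 1, -1, -1):      # feed coefficients in, highest first
        d = np.empty(order + 1)
        d[0] = c[i] + alpha * g[0]
        if order >= 1:
            d[1] = b * g[0] + alpha * g[1]
        for j in range(2, order + 1):
            d[j] = g[j - 1] + alpha * (g[j] - d[j - 1])
        g = d
    return g

def vtln_matrix(order, alpha):
    """Stack the responses to unit vectors: the resulting matrix W satisfies
    freqt(c) == W @ c, i.e. VTLN as a linear (matrix) layer."""
    eye = np.eye(order + 1)
    return np.stack([freqt(eye[k], order, alpha) for k in range(order + 1)],
                    axis=1)
```

With `alpha = 0` the matrix reduces to the identity (no warping); for nonzero `alpha` it is a dense warping matrix. Because every element is a smooth function of `alpha`, embedding this construction in an autodiff framework lets the warping factor be trained jointly with the rest of the network, which is the mechanism the paper exploits.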


 DOI: 10.21437/SSW.2019-6

Cite as: Schnell, B., Garner, P.N. (2019) Neural VTLN for Speaker Adaptation in TTS. Proc. 10th ISCA Speech Synthesis Workshop, 29-34, DOI: 10.21437/SSW.2019-6.


@inproceedings{Schnell2019,
  author={Bastian Schnell and Philip N. Garner},
  title={{Neural VTLN for Speaker Adaptation in TTS}},
  year=2019,
  booktitle={Proc. 10th ISCA Speech Synthesis Workshop},
  pages={29--34},
  doi={10.21437/SSW.2019-6},
  url={http://dx.doi.org/10.21437/SSW.2019-6}
}