ISCA Archive Interspeech 2015

Minimum trajectory error training for deep neural networks, combined with stacked bottleneck features

Zhizheng Wu, Simon King

Recently, Deep Neural Networks (DNNs) have shown promise as an acoustic model for statistical parametric speech synthesis. Their ability to learn complex mappings from linguistic features to acoustic features has significantly advanced the naturalness of synthesised speech. However, because DNN parameter estimation methods typically attempt to minimise the mean squared error of each individual frame in the training data, the dynamic and continuous nature of speech parameters is neglected. In this paper, we propose a training criterion that minimises speech parameter trajectory errors, and so takes dynamic constraints from a wide acoustic context into account during training. We combine this novel training criterion with our previously proposed stacked bottleneck features, which provide wide linguistic context. Both objective and subjective evaluation results confirm the effectiveness of the proposed training criterion in improving model accuracy and the naturalness of synthesised speech.
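The core idea of the abstract can be illustrated with a minimal sketch (not the authors' code; window coefficients, variable names, and the single-dimension, unit-variance setup are simplifying assumptions): instead of scoring each frame's static+delta prediction independently, the smooth trajectory is first recovered with the maximum-likelihood parameter generation (MLPG) closed form, and the squared error is taken on that trajectory.

```python
import numpy as np

def delta_matrix(T, win=(-0.5, 0.0, 0.5)):
    """First-order delta window as a T x T matrix (edges truncated)."""
    D = np.zeros((T, T))
    for t in range(T):
        for k, w in zip((-1, 0, 1), win):
            if 0 <= t + k < T:
                D[t, t + k] = w
    return D

def stack_W(T):
    """Stack the static (identity) and delta windows into W: (2T x T)."""
    return np.vstack([np.eye(T), delta_matrix(T)])

def trajectory_error(pred_osd, ref_traj, precision=None):
    """Trajectory squared error for one utterance, one feature dimension.

    pred_osd : (2T,) network-predicted static+delta observations o
    ref_traj : (T,)  reference static trajectory c_ref
    precision: (2T x 2T) optional precision matrix U (identity if None)

    Recovers c = (W' U W)^{-1} W' U o  (MLPG closed form), then returns
    the per-frame mean squared error between c and c_ref, plus c itself.
    """
    T = ref_traj.shape[0]
    W = stack_W(T)
    U = np.eye(2 * T) if precision is None else precision
    A = W.T @ U @ W                       # positive definite, so solvable
    c = np.linalg.solve(A, W.T @ U @ pred_osd)
    err = c - ref_traj
    return float(err @ err / T), c
```

If the predicted observations exactly equal `W @ ref_traj`, the recovered trajectory matches the reference and the error is zero; a frame-level MSE, by contrast, would penalise static and delta streams independently without this smoothness constraint.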


doi: 10.21437/Interspeech.2015-123

Cite as: Wu, Z., King, S. (2015) Minimum trajectory error training for deep neural networks, combined with stacked bottleneck features. Proc. Interspeech 2015, 309-313, doi: 10.21437/Interspeech.2015-123

@inproceedings{wu15_interspeech,
  author={Zhizheng Wu and Simon King},
  title={{Minimum trajectory error training for deep neural networks, combined with stacked bottleneck features}},
  year=2015,
  booktitle={Proc. Interspeech 2015},
  pages={309--313},
  doi={10.21437/Interspeech.2015-123}
}