16th Annual Conference of the International Speech Communication Association

Dresden, Germany
September 6-10, 2015

Deep Neural Network Context Embeddings for Model Selection in Rich-Context HMM Synthesis

Thomas Merritt, Junichi Yamagishi, Zhizheng Wu, Oliver Watts, Simon King

University of Edinburgh, UK

This paper introduces a novel form of parametric synthesis that uses context embeddings produced by the bottleneck layer of a deep neural network to guide the selection of models in a rich-context HMM-based synthesiser. Rich-context synthesis — in which Gaussian distributions estimated from single linguistic contexts seen in the training data are used for synthesis, rather than more conventional decision-tree-tied models — was originally proposed to address over-smoothing due to averaging across contexts. Our previous investigations have confirmed experimentally that averaging across different contexts is indeed one of the largest factors contributing to the limited quality of statistical parametric speech synthesis. However, a possible weakness of the rich-context approach as previously formulated is that a conventional tied model is still used to guide the selection of Gaussians at synthesis time. Our proposed approach replaces this with context embeddings derived from a neural network.
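The selection mechanism described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the network architecture, activation function, and all names (bottleneck_embed, select_rich_context_model, etc.) are assumptions. The idea is to forward-propagate a target context's linguistic features to the narrow bottleneck layer, then pick the training-context Gaussian whose stored bottleneck embedding lies nearest to it.

```python
import numpy as np

def bottleneck_embed(linguistic_features, weights, biases, bottleneck_index):
    """Forward-propagate a linguistic feature vector through the DNN and
    return the activations of the (narrow) bottleneck hidden layer.
    tanh hidden units are assumed here purely for illustration."""
    h = linguistic_features
    for i, (W, b) in enumerate(zip(weights, biases)):
        h = np.tanh(W @ h + b)
        if i == bottleneck_index:
            return h  # stop at the bottleneck: this is the context embedding
    return h

def select_rich_context_model(query_embedding, train_embeddings):
    """Return the index of the training-context Gaussian whose bottleneck
    embedding is closest (Euclidean distance) to the query embedding."""
    dists = np.linalg.norm(train_embeddings - query_embedding, axis=1)
    return int(np.argmin(dists))
```

In use, `train_embeddings` would hold one row per single-context Gaussian seen in training; at synthesis time each target context is embedded and the nearest rich-context model is selected in place of a decision-tree lookup.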


Bibliographic reference. Merritt, Thomas / Yamagishi, Junichi / Wu, Zhizheng / Watts, Oliver / King, Simon (2015): "Deep neural network context embeddings for model selection in rich-context HMM synthesis", in INTERSPEECH-2015, 2207-2211.