This paper describes Apple’s hybrid unit selection speech synthesis system, which provides the voices for Siri with the requirements of naturalness, personality, and expressivity. It has been deployed to hundreds of millions of desktop and mobile devices (e.g., iPhone, iPad, Mac) via iOS and macOS in multiple languages. The system follows the classical unit selection framework while using deep learning techniques to boost performance. In particular, deep and recurrent mixture density networks are used to predict the target and concatenation distributions used in the respective costs during unit selection. In this paper, we present an overview of the run-time TTS engine and the voice building process. We also describe various techniques that enable on-device capability, such as preselection optimization, caching for low latency, and unit pruning for low footprint, as well as techniques that improve the naturalness and expressivity of the voice, such as the use of long units.
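As a rough illustration of the deep learning-guided costs mentioned above, the sketch below (not taken from the paper; the feature dimensions, Gaussian parameters, and helper names are hypothetical stand-ins for the mixture density network outputs) shows how per-unit target and concatenation costs can be computed as negative log-likelihoods under network-predicted Gaussian distributions and combined in a Viterbi search over candidate units.

```python
# Minimal sketch (hypothetical, not the paper's implementation): unit selection
# where target and concatenation costs are negative log-likelihoods under
# Gaussians whose means/variances would come from (recurrent) mixture density
# networks. The network outputs are stubbed with fixed/random values here.
import numpy as np

rng = np.random.default_rng(0)
DIM = 4            # acoustic feature dimension (hypothetical)
N_TARGETS = 5      # number of target positions in the utterance
N_CANDS = 3        # candidate units per target position

def gaussian_nll(x, mean, var):
    """Negative log-likelihood of x under a diagonal Gaussian."""
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var)

# Stub MDN outputs: one predicted (mean, var) per target position (target cost)
# and one per junction between positions (concatenation cost).
target_dists = [(rng.normal(size=DIM), np.ones(DIM)) for _ in range(N_TARGETS)]
concat_dists = [(np.zeros(DIM), 0.5 * np.ones(DIM)) for _ in range(N_TARGETS - 1)]

# Candidate units from the database: acoustic features per (position, candidate).
cands = rng.normal(size=(N_TARGETS, N_CANDS, DIM))

# Viterbi search: total cost = target cost of each unit plus a concatenation
# cost on the feature difference across each join.
cost = np.full((N_TARGETS, N_CANDS), np.inf)
back = np.zeros((N_TARGETS, N_CANDS), dtype=int)
for j in range(N_CANDS):
    mean, var = target_dists[0]
    cost[0, j] = gaussian_nll(cands[0, j], mean, var)
for t in range(1, N_TARGETS):
    t_mean, t_var = target_dists[t]
    c_mean, c_var = concat_dists[t - 1]
    for j in range(N_CANDS):
        tgt = gaussian_nll(cands[t, j], t_mean, t_var)
        for i in range(N_CANDS):
            join = gaussian_nll(cands[t, j] - cands[t - 1, i], c_mean, c_var)
            total = cost[t - 1, i] + tgt + join
            if total < cost[t, j]:
                cost[t, j], back[t, j] = total, i

# Backtrack the lowest-cost unit sequence.
path = [int(np.argmin(cost[-1]))]
for t in range(N_TARGETS - 1, 0, -1):
    path.append(int(back[t, path[-1]]))
path.reverse()
print("selected candidate indices:", path)
```

In practice the candidate set per target would first be narrowed by a preselection step and the costs cached, as the abstract notes, but the dynamic-programming structure of the search is the same.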
Cite as: Capes, T., Coles, P., Conkie, A., Golipour, L., Hadjitarkhani, A., Hu, Q., Huddleston, N., Hunt, M., Li, J., Neeracher, M., Prahallad, K., Raitio, T., Rasipuram, R., Townsend, G., Williamson, B., Winarsky, D., Wu, Z., Zhang, H. (2017) Siri On-Device Deep Learning-Guided Unit Selection Text-to-Speech System. Proc. Interspeech 2017, 4011-4015, doi: 10.21437/Interspeech.2017-1798
@inproceedings{capes17_interspeech,
  author={Tim Capes and Paul Coles and Alistair Conkie and Ladan Golipour and Abie Hadjitarkhani and Qiong Hu and Nancy Huddleston and Melvyn Hunt and Jiangchuan Li and Matthias Neeracher and Kishore Prahallad and Tuomo Raitio and Ramya Rasipuram and Greg Townsend and Becci Williamson and David Winarsky and Zhizheng Wu and Hepeng Zhang},
  title={{Siri On-Device Deep Learning-Guided Unit Selection Text-to-Speech System}},
  year=2017,
  booktitle={Proc. Interspeech 2017},
  pages={4011--4015},
  doi={10.21437/Interspeech.2017-1798}
}