ISCA Archive Interspeech 2013

Speaker-specific retraining for enhanced compression of unit selection text-to-speech databases

Jani Nurminen, Hanna Silén, Moncef Gabbouj

Unit-selection-based text-to-speech systems can generally achieve high speech quality provided that the database is large enough. In embedded applications, the related memory requirements may be excessive, and the database often needs to be both pruned and compressed to fit into the available memory space. In this paper, we study the topic of database compression. In particular, the focus is on speaker-specific optimization of the quantizers used in the database compression. First, we introduce the simple concept of dynamic quantizer structures, facilitating the use of speaker-specific optimizations by enabling convenient run-time updates. Second, we show that significant memory savings can be obtained through speaker-specific retraining while perfectly maintaining the quantization accuracy, even when the memory required for the additional codebook data is taken into account. Thus, the proposed approach can be considered effective in reducing the conventionally large footprint of unit-selection-based text-to-speech systems.
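The core idea of speaker-specific retraining, re-estimating a vector quantizer's codebook on one speaker's data so that fewer bits (or a smaller codebook) reach the same distortion, can be illustrated with a minimal sketch. The paper does not specify the training algorithm; the version below assumes plain Lloyd's/k-means iterations on the speaker's feature vectors, and all function names are hypothetical.

```python
import numpy as np

def train_codebook(features, codebook_size, n_iters=20, seed=0):
    """Retrain a VQ codebook on one speaker's feature vectors.

    Illustrative only: uses plain Lloyd's (k-means) iterations; the
    paper's actual quantizer design may differ.
    """
    rng = np.random.default_rng(seed)
    # Initialise centroids from randomly chosen feature vectors.
    idx = rng.choice(len(features), size=codebook_size, replace=False)
    codebook = features[idx].copy()
    for _ in range(n_iters):
        # Nearest-centroid assignment for every feature vector.
        dists = np.linalg.norm(
            features[:, None, :] - codebook[None, :, :], axis=-1)
        assign = dists.argmin(axis=1)
        # Centroid update; an empty cell keeps its old centroid.
        for k in range(codebook_size):
            members = features[assign == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook

def quantize(features, codebook):
    """Return codebook indices -- the compressed representation
    stored in the unit selection database."""
    dists = np.linalg.norm(
        features[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1)
```

Under this scheme, the "dynamic quantizer structure" amounts to keeping the codebook as replaceable run-time data rather than a fixed table, so a per-speaker codebook can be swapped in; the memory cost of the extra codebook is what the paper weighs against the savings from the smaller indices.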


doi: 10.21437/Interspeech.2013-106

Cite as: Nurminen, J., Silén, H., Gabbouj, M. (2013) Speaker-specific retraining for enhanced compression of unit selection text-to-speech databases. Proc. Interspeech 2013, 388-391, doi: 10.21437/Interspeech.2013-106

@inproceedings{nurminen13_interspeech,
  author={Jani Nurminen and Hanna Silén and Moncef Gabbouj},
  title={{Speaker-specific retraining for enhanced compression of unit selection text-to-speech databases}},
  year=2013,
  booktitle={Proc. Interspeech 2013},
  pages={388--391},
  doi={10.21437/Interspeech.2013-106}
}