Deep learning has been applied successfully to speech processing. In this paper we propose an architecture for speech synthesis with multiple speakers. Some hidden layers are shared by all the speakers, while each speaker has its own specific output layer. Objective and perceptual experiments show that this scheme produces significantly better results than a single-speaker model. Moreover, we also tackle the problem of speaker interpolation by adding a new output layer (α-layer) on top of the multi-output branches. An identifying code is injected into this layer together with the acoustic features of multiple speakers. Experiments show that the α-layer can effectively learn to interpolate the acoustic features between speakers.
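The following is a minimal sketch (not the authors' code) of the shared-hidden-layers / per-speaker-output-branch idea described above, written in PyTorch with illustrative names and sizes. The paper's α-layer is a learned layer fed with an identifying code; here, for brevity, speaker interpolation is approximated by a convex combination of the branch outputs.

import torch
import torch.nn as nn

class MultiSpeakerLSTM(nn.Module):
    """Shared LSTM body with one acoustic-feature output head per speaker (illustrative)."""

    def __init__(self, in_dim, hidden_dim, out_dim, num_speakers):
        super().__init__()
        # Hidden layers shared by all speakers
        self.shared = nn.LSTM(in_dim, hidden_dim, num_layers=2, batch_first=True)
        # One output branch (head) per speaker
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, out_dim) for _ in range(num_speakers)]
        )

    def forward(self, x, speaker_id=None, alpha=None):
        # x: (batch, time, in_dim) frame-level linguistic features
        h, _ = self.shared(x)                                          # (B, T, hidden_dim)
        outs = torch.stack([head(h) for head in self.heads], dim=0)    # (S, B, T, out_dim)
        if alpha is not None:
            # Interpolation between speakers: convex combination of branch
            # outputs with per-speaker weights alpha of shape (S,).
            # (The paper instead learns this mapping in an alpha-layer.)
            w = alpha.view(-1, 1, 1, 1)
            return (w * outs).sum(dim=0)
        return outs[speaker_id]                                         # single-speaker branch

For example, model(x, alpha=torch.tensor([0.5, 0.5])) would produce acoustic features halfway between two speakers under this simplified scheme.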
Cite as: Pascual, S., Bonafonte, A. (2016) Multi-output RNN-LSTM for multiple speaker speech synthesis with α-interpolation model. Proc. 9th ISCA Workshop on Speech Synthesis Workshop (SSW 9), 112-117, doi: 10.21437/SSW.2016-19
@inproceedings{pascual16_ssw,
  author={Santiago Pascual and Antonio Bonafonte},
  title={{Multi-output RNN-LSTM for multiple speaker speech synthesis with α-interpolation model}},
  year=2016,
  booktitle={Proc. 9th ISCA Workshop on Speech Synthesis Workshop (SSW 9)},
  pages={112--117},
  doi={10.21437/SSW.2016-19}
}