Waveform Generation Based on Signal Reshaping for Statistical Parametric Speech Synthesis

Felipe Espic, Cassia Valentini-Botinhao, Zhizheng Wu, Simon King


We propose a new paradigm of waveform generation for Statistical Parametric Speech Synthesis that is based on neither source-filter separation nor sinusoidal modelling. We suggest that one of the main problems of current vocoding techniques is that they perform an extreme decomposition of the speech signal into source and filter, which is an underlying cause of “buzziness”, “musical artifacts”, or “muffled sound” in the synthetic speech. The proposed method avoids making unnecessary assumptions and decompositions as far as possible, and uses only the spectral envelope and F0 as parameters. Pre-recorded speech is used as a base signal, which is “reshaped” to match the acoustic specification predicted by the statistical model, without any source-filter decomposition. A detailed description of the method is presented, including implementation details and adjustments. Subjective listening test evaluations of complete DNN-based text-to-speech systems were conducted for two voices: one female and one male. The results show that the proposed method tends to outperform the state-of-the-art standard vocoder STRAIGHT, whilst using fewer acoustic parameters.
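The abstract's core idea of "reshaping" a base signal to match a target spectral envelope, without separating it into source and filter, can be illustrated with a minimal STFT-magnitude sketch. This is a hypothetical toy illustration, not the paper's actual algorithm: the function name `reshape_frames`, the moving-average envelope estimate, and the target-envelope function are all assumptions made for the example; the base signal's phase is kept unmodified, in the spirit of avoiding source-filter decomposition.

```python
import numpy as np

def reshape_frames(base, target_env_fn, frame_len=256, hop=64):
    """Toy 'reshaping' illustration (NOT the paper's method): per frame,
    scale the base signal's spectral magnitudes by the ratio of a target
    envelope to a crude estimate of the base envelope, keeping base phase."""
    win = np.hanning(frame_len)
    out = np.zeros(len(base))
    norm = np.zeros(len(base))
    freqs = np.fft.rfftfreq(frame_len)  # normalised frequency, 0..0.5
    kernel = np.ones(9) / 9.0           # smoother for envelope estimate
    for start in range(0, len(base) - frame_len + 1, hop):
        frame = base[start:start + frame_len] * win
        spec = np.fft.rfft(frame)
        mag = np.abs(spec)
        # crude base-envelope estimate: moving average of the magnitudes
        base_env = np.convolve(mag, kernel, mode="same") + 1e-12
        # reshape magnitudes toward the target envelope; phase unchanged
        spec *= target_env_fn(freqs) / base_env
        out[start:start + frame_len] += np.fft.irfft(spec) * win
        norm[start:start + frame_len] += win ** 2
    return out / np.maximum(norm, 1e-12)  # overlap-add normalisation

# Usage sketch: impose a decaying target envelope on a noise base signal.
rng = np.random.default_rng(0)
base = rng.standard_normal(4096)
reshaped = reshape_frames(base, lambda f: np.exp(-5.0 * f))
```

In the paper's actual system the target envelope and F0 would come from the statistical model and the base signal from pre-recorded speech; here both are synthetic placeholders.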


DOI: 10.21437/Interspeech.2016-487

Cite as

Espic, F., Valentini-Botinhao, C., Wu, Z., King, S. (2016) Waveform Generation Based on Signal Reshaping for Statistical Parametric Speech Synthesis. Proc. Interspeech 2016, 2263-2267.

BibTeX
@inproceedings{Espic+2016,
  author={Felipe Espic and Cassia Valentini-Botinhao and Zhizheng Wu and Simon King},
  title={Waveform Generation Based on Signal Reshaping for Statistical Parametric Speech Synthesis},
  year={2016},
  booktitle={Interspeech 2016},
  doi={10.21437/Interspeech.2016-487},
  url={http://dx.doi.org/10.21437/Interspeech.2016-487},
  pages={2263--2267}
}