Forward-Backward Decoding for Regularizing End-to-End TTS

Yibin Zheng, Xi Wang, Lei He, Shifeng Pan, Frank K. Soong, Zhengqi Wen, Jianhua Tao


Neural end-to-end TTS can generate very high-quality synthesized speech, even close to human recordings for in-domain text. However, it performs unsatisfactorily when scaled to challenging test sets. One concern is that the encoder-decoder network with attention adopts an autoregressive generative sequence model, which suffers from "exposure bias". To address this issue, we propose two novel methods that learn to predict the future by improving the agreement between forward and backward decoding sequences. The first introduces divergence regularization terms into the model training objective to reduce the mismatch between two directional models, namely L2R and R2L (which generate targets from left-to-right and right-to-left, respectively). The second operates at the decoder level and exploits future information during decoding. In addition, we employ a joint training strategy that allows the forward and backward decoding to improve each other in an interactive process. Experimental results show that our proposed methods, especially the second one (bidirectional decoder regularization), lead to a significant improvement in both robustness and overall naturalness, outperforming the baseline (a revised version of Tacotron2) by a MOS gap of 0.14 on a challenging test set, and achieving close to human quality (4.42 vs. 4.49 in MOS) on a general test set.
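The first method described in the abstract can be illustrated with a minimal sketch. The snippet below is an assumption-laden toy version (not the paper's implementation): it uses a plain L2 distance as the divergence term between the L2R decoder's mel-spectrogram outputs and the time-reversed R2L outputs, with a hypothetical weight `lam` balancing the regularizer against the two directional reconstruction losses.

```python
import numpy as np

def agreement_regularization(l2r_mel, r2l_mel):
    """L2 divergence between forward (L2R) decoder outputs and
    time-reversed backward (R2L) decoder outputs.

    l2r_mel, r2l_mel: arrays of shape (T, n_mels). The R2L sequence is
    generated right-to-left, so it is flipped along the time axis before
    comparison so that corresponding frames line up.
    """
    r2l_aligned = r2l_mel[::-1]  # reverse time axis to align with L2R order
    return float(np.mean((l2r_mel - r2l_aligned) ** 2))

def joint_training_loss(recon_l2r, recon_r2l, l2r_mel, r2l_mel, lam=1.0):
    # Joint objective: both directional reconstruction losses plus the
    # agreement regularizer, weighted by a hypothetical hyperparameter lam.
    return recon_l2r + recon_r2l + lam * agreement_regularization(l2r_mel, r2l_mel)
```

When the two directional decoders agree perfectly (after time reversal), the regularizer vanishes and only the reconstruction terms remain; the gradient of the agreement term otherwise pulls each decoder toward the other, which is the mutual-improvement effect the joint training strategy relies on.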


 DOI: 10.21437/Interspeech.2019-2325

Cite as: Zheng, Y., Wang, X., He, L., Pan, S., Soong, F.K., Wen, Z., Tao, J. (2019) Forward-Backward Decoding for Regularizing End-to-End TTS. Proc. Interspeech 2019, 1283-1287, DOI: 10.21437/Interspeech.2019-2325.


@inproceedings{Zheng2019,
  author={Yibin Zheng and Xi Wang and Lei He and Shifeng Pan and Frank K. Soong and Zhengqi Wen and Jianhua Tao},
  title={{Forward-Backward Decoding for Regularizing End-to-End TTS}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={1283--1287},
  doi={10.21437/Interspeech.2019-2325},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2325}
}