Speech Enhancement for a Noise-Robust Text-to-Speech Synthesis System Using Deep Recurrent Neural Networks

Cassia Valentini-Botinhao, Xin Wang, Shinji Takaki, Junichi Yamagishi


The quality of text-to-speech voices built from noisy recordings is diminished. In order to improve it we propose the use of a recurrent neural network to enhance acoustic parameters prior to training. We trained a deep recurrent neural network using a parallel database of noisy and clean acoustic parameters as input and output of the network. The database consisted of multiple speakers and diverse noise conditions. We investigated using text-derived features as an additional input of the network. We processed a noisy database of two other speakers using this network and used its output to train an HMM-based text-to-speech acoustic model for each voice. Listening experiment results showed that the voice built with enhanced parameters was ranked significantly higher than voices trained with noisy speech or with speech enhanced by a conventional enhancement system. The text-derived features improved results only for the female voice, which was ranked as highly as a voice trained with clean speech.
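The core mapping the abstract describes — a recurrent network that takes a sequence of noisy acoustic parameter vectors and predicts the corresponding clean vectors — can be sketched as follows. This is an illustration only, not the authors' implementation: the single vanilla recurrent layer, the 40-dimensional parameter vectors, and the 64 hidden units are simplifying assumptions (the paper uses a deep recurrent architecture, optionally with text-derived features as extra input).

```python
import numpy as np

def rnn_enhance(noisy_frames, Wx, Wh, Wo, bh, bo):
    """Map noisy acoustic parameter frames to enhanced (clean-estimate)
    frames with one vanilla recurrent layer. Illustrative sketch only."""
    h = np.zeros(Wh.shape[0])
    enhanced = []
    for x in noisy_frames:                    # one acoustic frame per step
        h = np.tanh(Wx @ x + Wh @ h + bh)     # recurrent hidden state
        enhanced.append(Wo @ h + bo)          # predicted clean parameters
    return np.array(enhanced)

# Toy dimensions (assumed): 40-dim acoustic parameters, 64 hidden units,
# 100 frames. Weights are random here; in the paper they are trained on a
# parallel noisy/clean multi-speaker, multi-noise database.
rng = np.random.default_rng(0)
dim, hidden, T = 40, 64, 100
weights = [rng.standard_normal(s) * 0.1 for s in
           [(hidden, dim), (hidden, hidden), (dim, hidden),
            (hidden,), (dim,)]]
noisy = rng.standard_normal((T, dim))
clean_estimate = rnn_enhance(noisy, *weights)
print(clean_estimate.shape)  # (100, 40)
```

The enhanced parameter sequences, rather than re-synthesised waveforms, would then serve directly as training data for the text-to-speech acoustic model.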


DOI: 10.21437/Interspeech.2016-159

Cite as

Valentini-Botinhao, C., Wang, X., Takaki, S., Yamagishi, J. (2016) Speech Enhancement for a Noise-Robust Text-to-Speech Synthesis System Using Deep Recurrent Neural Networks. Proc. Interspeech 2016, 352-356.

Bibtex
@inproceedings{Valentini-Botinhao+2016,
author={Cassia Valentini-Botinhao and Xin Wang and Shinji Takaki and Junichi Yamagishi},
title={Speech Enhancement for a Noise-Robust Text-to-Speech Synthesis System Using Deep Recurrent Neural Networks},
year=2016,
booktitle={Interspeech 2016},
doi={10.21437/Interspeech.2016-159},
url={http://dx.doi.org/10.21437/Interspeech.2016-159},
pages={352--356}
}