ISCA Archive Interspeech 2013

Speech enhancement based on deep denoising autoencoder

Xugang Lu, Yu Tsao, Shigeki Matsuda, Chiori Hori

We have previously applied a deep autoencoder (DAE) for noise reduction and speech enhancement. However, that DAE was trained using only clean speech. In this study, we introduce a denoising process into DAE training by using noisy-clean training pairs. Training still follows a greedy layer-wise pretraining plus fine-tuning strategy. In pretraining, each layer is trained as a one-hidden-layer neural autoencoder (AE) with noisy-clean speech pairs as input and output (or noisy-clean pairs transformed by the preceding AEs). Fine-tuning is then performed on the stack of all AEs, with the pretrained parameters as initialization. The trained DAE is used as a filter that estimates clean speech from noisy input. Speech enhancement experiments were conducted to examine the performance of the trained denoising DAE, using noise reduction, speech distortion, and perceptual evaluation of speech quality (PESQ) as evaluation criteria. Experimental results show that adding depth to the DAE consistently increases performance when a large training data set is available. In addition, compared with a minimum mean square error (MMSE)-based speech enhancement algorithm, the proposed denoising DAE achieved superior performance on all three objective evaluations.
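The training recipe in the abstract — greedy layer-wise pretraining of one-hidden-layer AEs on (transformed) noisy-clean pairs, then fine-tuning the stacked network end to end — can be sketched in NumPy. This is only an illustrative toy on synthetic vectors, not the authors' implementation: the tanh activations, layer sizes, learning rates, and the fresh linear output layer added before fine-tuning are all assumptions, and real systems would train on log-spectral speech features.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_layer(d_in, d_out):
    # Small random weights, zero biases (assumed initialization).
    return rng.normal(0.0, 0.3, (d_in, d_out)), np.zeros(d_out)

def train(Ws, bs, x, target, epochs=500, lr=0.2):
    """Gradient-descent MSE training of a tanh MLP with a linear output layer.
    With two layers this is the one-hidden-layer AE used in pretraining;
    with more layers it is the fine-tuning pass over the whole stack."""
    n = len(x)
    for _ in range(epochs):
        acts = [x]
        for W, b in zip(Ws[:-1], bs[:-1]):
            acts.append(np.tanh(acts[-1] @ W + b))
        out = acts[-1] @ Ws[-1] + bs[-1]
        delta = (out - target) / n                  # MSE gradient at the output
        for i in range(len(Ws) - 1, -1, -1):
            gW = acts[i].T @ delta
            gb = delta.sum(0)
            if i > 0:                               # backprop through tanh
                delta = (delta @ Ws[i].T) * (1.0 - acts[i] ** 2)
            Ws[i] -= lr * gW
            bs[i] -= lr * gb
    return Ws, bs

def pretrain(noisy, clean, hidden_sizes):
    """Greedy layer-wise pretraining: each layer is a one-hidden-layer AE
    trained with noisy input and clean target (or the pair transformed by
    the encoders of the preceding AEs)."""
    encoders, x, t = [], noisy, clean
    for h in hidden_sizes:
        W1, b1 = init_layer(x.shape[1], h)
        W2, b2 = init_layer(h, t.shape[1])
        train([W1, W2], [b1, b2], x, t)
        encoders.append((W1, b1))
        x = np.tanh(x @ W1 + b1)                    # transform the pair for
        t = np.tanh(t @ W1 + b1)                    # the next layer's AE
    return encoders

def build_and_finetune(noisy, clean, hidden_sizes):
    """Stack the pretrained encoders, add a linear output layer (assumed),
    and fine-tune the whole network on the original noisy-clean pairs."""
    encoders = pretrain(noisy, clean, hidden_sizes)
    Ws = [W for W, _ in encoders]
    bs = [b for _, b in encoders]
    Wo, bo = init_layer(hidden_sizes[-1], clean.shape[1])
    Ws.append(Wo)
    bs.append(bo)
    return train(Ws, bs, noisy, clean)

def denoise(Ws, bs, noisy):
    """Use the trained DAE as a filter: noisy features in, estimate out."""
    h = noisy
    for W, b in zip(Ws[:-1], bs[:-1]):
        h = np.tanh(h @ W + b)
    return h @ Ws[-1] + bs[-1]
```

On toy data (a low-rank clean signal plus additive noise), the fine-tuned stack should produce estimates with lower mean square error than the noisy input itself, which is the basic behavior the paper's filter exhibits at scale.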


doi: 10.21437/Interspeech.2013-130

Cite as: Lu, X., Tsao, Y., Matsuda, S., Hori, C. (2013) Speech enhancement based on deep denoising autoencoder. Proc. Interspeech 2013, 436-440, doi: 10.21437/Interspeech.2013-130

@inproceedings{lu13_interspeech,
  author={Xugang Lu and Yu Tsao and Shigeki Matsuda and Chiori Hori},
  title={{Speech enhancement based on deep denoising autoencoder}},
  year=2013,
  booktitle={Proc. Interspeech 2013},
  pages={436--440},
  doi={10.21437/Interspeech.2013-130}
}