Speech Denoising with Deep Feature Losses

François G. Germain, Qifeng Chen, Vladlen Koltun


We present an end-to-end deep learning approach to denoising speech signals by processing the raw waveform directly. Given input audio containing speech corrupted by an additive background signal, the system aims to produce a processed signal that contains only the speech content. Recent approaches have shown promising results using various deep network architectures. In this paper, we propose to train a fully-convolutional context aggregation network using a deep feature loss. This loss compares the internal feature activations of the two signals in a separate network, trained for audio classification. Our approach outperforms the state of the art in objective speech quality metrics and in large-scale perceptual experiments with human listeners. It also outperforms an identical network trained using traditional regression losses. The advantage of the new approach is particularly pronounced for the hardest data with the most intrusive background noise, for which denoising is most needed and most challenging.
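The core idea of a deep feature loss can be sketched compactly: instead of comparing the denoised waveform to the clean target sample-by-sample, both signals are passed through a fixed feature network and the distances between their per-layer activations are accumulated. The sketch below is a minimal NumPy illustration, not the paper's implementation: the real loss network is a convolutional classifier pretrained on audio tagging tasks, whereas here the layer weights, the kernel values, and the per-layer scaling factors (`lambdas`) are all hypothetical stand-ins.

```python
import numpy as np

def conv1d(x, w):
    # Valid-mode 1-D convolution (cross-correlation) for a single channel.
    k = len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(len(x) - k + 1)])

def feature_activations(x, weights):
    """Run a waveform through a stack of conv+ReLU layers (a stand-in for
    the pretrained classification network) and collect every activation."""
    acts, h = [], x
    for w in weights:
        h = np.maximum(conv1d(h, w), 0.0)  # ReLU nonlinearity
        acts.append(h)
    return acts

def deep_feature_loss(clean, denoised, weights, lambdas):
    """Weighted sum of mean L1 distances between the two signals'
    activations at each layer of the fixed feature network."""
    acts_c = feature_activations(clean, weights)
    acts_d = feature_activations(denoised, weights)
    return sum(lam * np.mean(np.abs(a - b))
               for lam, a, b in zip(lambdas, acts_c, acts_d))
```

During training, only the denoising network's parameters are updated; the feature network stays frozen, so the loss acts as a fixed perceptually-motivated distance between activations rather than between raw samples.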


DOI: 10.21437/Interspeech.2019-1924

Cite as: Germain, F.G., Chen, Q., Koltun, V. (2019) Speech Denoising with Deep Feature Losses. Proc. Interspeech 2019, 2723-2727, DOI: 10.21437/Interspeech.2019-1924.


@inproceedings{Germain2019,
  author={François G. Germain and Qifeng Chen and Vladlen Koltun},
  title={{Speech Denoising with Deep Feature Losses}},
  year={2019},
  booktitle={Proc. Interspeech 2019},
  pages={2723--2727},
  doi={10.21437/Interspeech.2019-1924},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1924}
}