ISCA Archive Interspeech 2021

Speech Denoising Without Clean Training Data: A Noise2Noise Approach

Madhav Mahesh Kashyap, Anuj Tambwekar, Krishnamoorthy Manohara, S. Natarajan

This paper tackles the heavy dependence of deep-learning-based audio denoising methods on clean speech data by showing that deep speech denoising networks can be trained using only noisy speech samples. Conventional wisdom dictates that good speech denoising performance requires large quantities of both noisy speech samples and perfectly clean speech samples, which in turn demands expensive audio recording equipment and tightly controlled soundproof recording studios. These requirements pose significant challenges to data collection, especially in economically disadvantaged regions and for low-resource languages. This work shows that speech denoising deep neural networks can be successfully trained using only noisy training audio. Furthermore, such training regimes achieve superior denoising performance over conventional regimes that use clean training targets in cases involving complex noise distributions and low signal-to-noise ratios (high-noise environments). This is demonstrated through experiments studying the efficacy of our proposed approach on both real-world and synthetic noises using the 20-layer Deep Complex U-Net architecture.
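The training regime described in the abstract is easiest to see in code. Below is a minimal sketch of Noise2Noise-style denoising training, assuming PyTorch: the network input and the training target are two independently corrupted versions of the same utterance, so no clean reference is ever shown to the network. The tiny 1-D convolutional TinyDenoiser, the MSE loss, and the synthetic Gaussian-noise pairs are illustrative placeholders only; the paper itself trains a 20-layer Deep Complex U-Net on complex spectrograms with its own loss.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDenoiser(nn.Module):
    """Hypothetical stand-in for the denoising network (the paper uses DCUnet-20)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv1d(32, 1, kernel_size=9, padding=4),
        )

    def forward(self, x):
        return self.net(x)

def noise2noise_step(model, optimizer, noisy_input, noisy_target):
    """One training step. Both tensors are noisy realisations of the same
    underlying utterance; the target is itself noisy, never clean."""
    optimizer.zero_grad()
    estimate = model(noisy_input)
    loss = F.mse_loss(estimate, noisy_target)  # regress noisy input onto noisy target
    loss.backward()
    optimizer.step()
    return loss.item()

# Illustrative usage with synthetic data: two independent noise draws
# are added to the same waveform to form an (input, target) pair.
model = TinyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.randn(8, 1, 16000)                 # stand-in for speech waveforms
pair_a = clean + 0.3 * torch.randn_like(clean)   # noisy network input
pair_b = clean + 0.3 * torch.randn_like(clean)   # independent noisy target
loss = noise2noise_step(model, opt, pair_a, pair_b)

The reason this can work is the original Noise2Noise observation: for zero-mean noise, the expected noisy target equals the clean signal, so the minimizer of the noisy-target regression coincides in expectation with that of clean-target training.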


doi: 10.21437/Interspeech.2021-1130

Cite as: Kashyap, M.M., Tambwekar, A., Manohara, K., Natarajan, S. (2021) Speech Denoising Without Clean Training Data: A Noise2Noise Approach. Proc. Interspeech 2021, 2716-2720, doi: 10.21437/Interspeech.2021-1130

@inproceedings{kashyap21_interspeech,
  author={Madhav Mahesh Kashyap and Anuj Tambwekar and Krishnamoorthy Manohara and S. Natarajan},
  title={{Speech Denoising Without Clean Training Data: A Noise2Noise Approach}},
  year={2021},
  booktitle={Proc. Interspeech 2021},
  pages={2716--2720},
  doi={10.21437/Interspeech.2021-1130}
}