Conditional Generative Adversarial Networks for Speech Enhancement and Noise-Robust Speaker Verification

Daniel Michelsanti, Zheng-Hua Tan


Improving speech system performance in noisy environments remains a challenging task, and speech enhancement (SE) is an effective technique for addressing it. Motivated by the promising results of generative adversarial networks (GANs) in a variety of image processing tasks, we explore the potential of conditional GANs (cGANs) for SE. In particular, we make use of the image processing framework proposed by Isola et al. [1] to learn a mapping from the spectrogram of noisy speech to an enhanced counterpart. The SE cGAN consists of two networks, trained in an adversarial manner: a generator that tries to enhance the input noisy spectrogram, and a discriminator that tries to distinguish between enhanced spectrograms provided by the generator and clean ones from the database, using the noisy spectrogram as a condition. We evaluate the performance of the cGAN method in terms of perceptual evaluation of speech quality (PESQ), short-time objective intelligibility (STOI), and equal error rate (EER) of speaker verification (an example application). Experimental results show that the cGAN method overall outperforms the classical short-time spectral amplitude minimum mean square error (STSA-MMSE) SE algorithm, and is comparable to a deep neural network-based SE approach (DNN-SE).
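The adversarial training described above follows the pix2pix-style conditional GAN formulation of Isola et al. [1]: the discriminator scores (noisy, clean) pairs as real and (noisy, enhanced) pairs as fake, while the generator is trained to fool it, typically alongside a weighted L1 reconstruction term. A minimal scalar sketch of the two loss terms is shown below; the function names and the `lam` weight are illustrative, not taken from the paper, and a real implementation would operate on spectrogram tensors rather than scalars.

```python
import math

def bce(pred, target):
    """Binary cross-entropy for a single sigmoid output in (0, 1)."""
    eps = 1e-12  # numerical guard against log(0)
    return -(target * math.log(pred + eps) + (1.0 - target) * math.log(1.0 - pred + eps))

def cgan_losses(d_real, d_fake, l1_dist, lam=100.0):
    """Conditional-GAN losses (hypothetical scalar sketch).

    d_real : discriminator score for a (noisy, clean) spectrogram pair
    d_fake : discriminator score for a (noisy, enhanced) spectrogram pair
    l1_dist: mean L1 distance between enhanced and clean spectrograms
    lam    : weight of the L1 reconstruction term (illustrative value)
    """
    # Discriminator: push real pairs toward 1, generated pairs toward 0.
    d_loss = bce(d_real, 1.0) + bce(d_fake, 0.0)
    # Generator: fool the discriminator, plus stay close to the clean target.
    g_loss = bce(d_fake, 1.0) + lam * l1_dist
    return d_loss, g_loss
```

When the discriminator is confident and correct (e.g. `d_real = 0.9`, `d_fake = 0.1`), its loss is small while the generator's adversarial loss is large, which is the gradient signal that drives the generator to produce more realistic enhanced spectrograms.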


DOI: 10.21437/Interspeech.2017-1620

Cite as: Michelsanti, D., Tan, Z.-H. (2017) Conditional Generative Adversarial Networks for Speech Enhancement and Noise-Robust Speaker Verification. Proc. Interspeech 2017, 2008-2012, DOI: 10.21437/Interspeech.2017-1620.


@inproceedings{Michelsanti2017,
  author={Daniel Michelsanti and Zheng-Hua Tan},
  title={Conditional Generative Adversarial Networks for Speech Enhancement and Noise-Robust Speaker Verification},
  year=2017,
  booktitle={Proc. Interspeech 2017},
  pages={2008--2012},
  doi={10.21437/Interspeech.2017-1620},
  url={http://dx.doi.org/10.21437/Interspeech.2017-1620}
}