Deep Noise Tracking Network: A Hybrid Signal Processing/Deep Learning Approach to Speech Enhancement

Shuai Nie, Shan Liang, Bin Liu, Yaping Zhang, Wenju Liu, Jianhua Tao


Noise statistics and speech spectrum characteristics are the essential information for single-channel speech enhancement. Signal processing-based methods mainly rely on noise statistics estimation; they perform very well for stationary noise but struggle to cope with non-stationary noise. Deep learning-based methods, in contrast, focus on perceiving the spectral characteristics of speech and can handle non-stationary noise. However, their performance degrades dramatically for unseen noise types, which may stem from an over-reliance on data and a neglect of signal-processing domain knowledge. A hybrid signal processing/deep learning scheme is therefore an attractive alternative. In this paper, we incorporate the powerful perceptual capabilities of deep learning into the conventional speech enhancement framework. Deep learning is used to estimate the speech presence probability and the update factor of the noise statistics, which are then integrated into a Wiener filter-based speech enhancement structure to enhance the desired speech. All components are jointly optimized with a spectrum approximation objective. Systematic experiments on CHiME-4 and NOISEX-92 demonstrate the effectiveness of the proposed hybrid signal processing/deep learning approach to noise suppression in both noise-unmatched and noise-matched conditions.
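To make the conventional framework concrete, the sketch below shows the classic recursive noise-PSD tracker with a Wiener gain, where a speech presence probability (SPP) controls the per-bin update factor. In the paper this SPP and update factor come from the deep network; here they are external inputs, and `alpha_min`, the gain floor, and the function name are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def wiener_enhance(power_spec, spp, alpha_min=0.8, gain_floor=0.05):
    """Recursive noise tracking plus Wiener filtering (illustrative sketch).

    power_spec : (frames, bins) noisy power spectrogram |Y|^2
    spp        : (frames, bins) speech presence probability in [0, 1]
                 (the paper's deep network would supply this estimate)
    Returns per-bin gains in [gain_floor, 1].
    """
    n_frames, _ = power_spec.shape
    noise_psd = power_spec[0].copy()            # initialize noise PSD from the first frame
    gains = np.empty_like(power_spec)
    for t in range(n_frames):
        # Update factor: high SPP -> alpha near 1 -> slow noise update,
        # so speech-dominated frames barely leak into the noise estimate.
        alpha = alpha_min + (1.0 - alpha_min) * spp[t]
        noise_psd = alpha * noise_psd + (1.0 - alpha) * power_spec[t]
        # Wiener-style gain from the estimated noise PSD, with a spectral floor.
        gains[t] = np.maximum(
            1.0 - noise_psd / np.maximum(power_spec[t], 1e-12), gain_floor
        )
    return gains
```

Replacing the fixed SPP heuristic with a learned estimator, as the paper proposes, keeps this interpretable filtering structure while letting the network adapt the tracker to non-stationary noise.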


DOI: 10.21437/Interspeech.2018-1020

Cite as: Nie, S., Liang, S., Liu, B., Zhang, Y., Liu, W., Tao, J. (2018) Deep Noise Tracking Network: A Hybrid Signal Processing/Deep Learning Approach to Speech Enhancement. Proc. Interspeech 2018, 3219-3223, DOI: 10.21437/Interspeech.2018-1020.


@inproceedings{Nie2018,
  author={Shuai Nie and Shan Liang and Bin Liu and Yaping Zhang and Wenju Liu and Jianhua Tao},
  title={Deep Noise Tracking Network: A Hybrid Signal Processing/Deep Learning Approach to Speech Enhancement},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={3219--3223},
  doi={10.21437/Interspeech.2018-1020},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1020}
}