Deep Speech Denoising with Vector Space Projections

Jeffrey Hetherly, Paul Gamble, Maria Alejandra Barrios, Cory Stephenson, Karl Ni


We propose an algorithm to denoise speakers from a single microphone in the presence of non-stationary and dynamic noise. Our approach is inspired by the recent success of neural network models separating speakers from other speakers and singers from instrumental accompaniment. Unlike prior art, we leverage embedding spaces produced with source-contrastive estimation, a technique derived from negative sampling techniques in natural language processing, while simultaneously obtaining a continuous inference mask. Our embedding space directly optimizes for the discrimination of speaker and noise by jointly modeling their characteristics. This space is generalizable in that it is not speaker- or noise-specific and is capable of denoising speech even if the model has not seen the speaker in the training set. Parameters are trained with dual objectives: one that promotes a selective bandpass filter that eliminates noise at time-frequency positions that exceed signal power, and another that proportionally splits time-frequency content between signal and noise. We compare to state-of-the-art algorithms as well as traditional sparse non-negative matrix factorization solutions. The resulting algorithm avoids severe computational burden by providing a more intuitive and easily optimized approach, while achieving competitive accuracy.
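The two training objectives in the abstract correspond to two classic time-frequency masks: a binary mask that keeps only bins where signal power exceeds noise power, and a ratio mask that splits each bin proportionally. The following is a minimal NumPy sketch of these two mask types on toy magnitude spectrograms; the variable names, shapes, and simple additive mixing are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Toy magnitude spectrograms on a (time, frequency) grid -- illustrative only.
rng = np.random.default_rng(0)
speech = rng.random((4, 5))   # |S(t, f)| of the target speaker
noise = rng.random((4, 5))    # |N(t, f)| of the interfering noise
mixture = speech + noise      # simplistic additive mixture of magnitudes

# Binary mask: pass a time-frequency bin only where signal exceeds noise,
# acting as the "selective bandpass filter" objective in the abstract.
binary_mask = (speech > noise).astype(float)

# Ratio mask: proportionally split each bin between signal and noise,
# matching the continuous-mask objective (epsilon avoids division by zero).
ratio_mask = speech / (speech + noise + 1e-8)

denoised_binary = binary_mask * mixture
denoised_ratio = ratio_mask * mixture
```

With additive magnitudes as above, the ratio mask applied to the mixture recovers the speech term exactly; in practice the masks are predicted by the network from the mixture alone, which is the hard part the embedding space addresses.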


DOI: 10.21437/Interspeech.2018-83

Cite as: Hetherly, J., Gamble, P., Barrios, M.A., Stephenson, C., Ni, K. (2018) Deep Speech Denoising with Vector Space Projections. Proc. Interspeech 2018, 3474-3478, DOI: 10.21437/Interspeech.2018-83.


@inproceedings{Hetherly2018,
  author={Jeffrey Hetherly and Paul Gamble and Maria Alejandra Barrios and Cory Stephenson and Karl Ni},
  title={Deep Speech Denoising with Vector Space Projections},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={3474--3478},
  doi={10.21437/Interspeech.2018-83},
  url={http://dx.doi.org/10.21437/Interspeech.2018-83}
}