Coarse-to-Fine Optimization for Speech Enhancement

Jian Yao, Ahmad Al-Dahle


In this paper, we propose coarse-to-fine optimization for the task of speech enhancement. Cosine similarity loss [1] has proven to be an effective metric for measuring the similarity of speech signals. However, because enhanced speech signals with the same cosine similarity loss can vary widely in high-dimensional space, a deep neural network trained with this loss alone may fail to predict enhanced speech of good quality. Our coarse-to-fine strategy optimizes the cosine similarity loss at multiple granularities, adding constraints on the prediction from high dimension down to relatively low dimension. In this way, the enhanced speech better resembles the clean speech. Experimental results show the effectiveness of our proposed coarse-to-fine optimization in both discriminative and generative models. Moreover, we apply the coarse-to-fine strategy to the adversarial loss in a generative adversarial network (GAN) and propose dynamic perceptual loss, which dynamically computes the adversarial loss from coarse resolution to fine resolution. Dynamic perceptual loss further improves accuracy and achieves state-of-the-art results compared with other generative models.
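The abstract does not give the exact formulation, but the multi-granularity idea can be sketched as follows: compute the cosine similarity loss not only on the whole signal but also on progressively finer segments, and average across scales. The function names, the power-of-two segmentation scheme, and the number of scales below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def cosine_sim_loss(pred, target, eps=1e-8):
    # 1 - cosine similarity between two signal vectors.
    num = np.dot(pred, target)
    den = np.linalg.norm(pred) * np.linalg.norm(target) + eps
    return 1.0 - num / den

def coarse_to_fine_loss(pred, target, num_scales=3):
    # Scale k splits the signal into 2**k segments; the cosine
    # similarity loss is averaged over segments, then over scales.
    # Segmentation scheme and scale count are illustrative choices.
    assert pred.shape == target.shape
    total = 0.0
    for k in range(num_scales):
        segs = 2 ** k
        seg_len = len(pred) // segs
        total += np.mean([
            cosine_sim_loss(pred[i * seg_len:(i + 1) * seg_len],
                            target[i * seg_len:(i + 1) * seg_len])
            for i in range(segs)
        ])
    return total / num_scales
```

Under this sketch, the coarsest scale constrains the overall signal while finer scales constrain local structure, which is one plausible way to realize the "high dimension to relatively low dimension" constraints described above.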


DOI: 10.21437/Interspeech.2019-2792

Cite as: Yao, J., Al-Dahle, A. (2019) Coarse-to-Fine Optimization for Speech Enhancement. Proc. Interspeech 2019, 2743-2747, DOI: 10.21437/Interspeech.2019-2792.


@inproceedings{Yao2019,
  author={Jian Yao and Ahmad Al-Dahle},
  title={{Coarse-to-Fine Optimization for Speech Enhancement}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={2743--2747},
  doi={10.21437/Interspeech.2019-2792},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2792}
}