Error Modeling via Asymmetric Laplace Distribution for Deep Neural Network Based Single-Channel Speech Enhancement

Li Chai, Jun Du, Chin-Hui Lee


The minimum mean squared error (MMSE), a conventional training criterion for deep neural network (DNN) based speech enhancement, has been found to suffer from several problems. In our recent work, a maximum likelihood (ML) approach to parameter learning was proposed by modeling the prediction error vector with a Gaussian density. In this study, a preliminary statistical analysis reveals the super-Gaussianity and asymmetry of the prediction error distribution. Consequently, we adopt the asymmetric Laplace distribution (ALD) instead of the Gaussian distribution (GD) to model the prediction error vectors, and present a new derivation for jointly optimizing the DNN and ALD parameters of the proposed ML-ALD-DNN framework. Moreover, both the formulation and the experiments show that the asymmetry parameter of the ALD can be interpreted as controlling the balance between noise reduction and speech preservation, implying that DNN models can be customized for different noise types and levels by setting the asymmetry parameter. Finally, our ML-ALD-DNN approach achieves better STOI and SSNR measures than both the MMSE-DNN and ML-GD-DNN approaches.
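To illustrate the idea, the following is a minimal sketch (not the paper's exact formulation) of a per-sample negative log-likelihood under a standard zero-location asymmetric Laplace density with scale `sigma` and asymmetry `tau`; the variable names and the reduction to a scalar mean are assumptions for illustration only:

```python
import numpy as np

def ald_nll(errors, sigma, tau):
    """Mean negative log-likelihood of prediction errors under a
    zero-location asymmetric Laplace distribution (ALD).

    errors : prediction errors (target minus DNN estimate)
    sigma  : scale parameter, sigma > 0
    tau    : asymmetry parameter in (0, 1); tau = 0.5 recovers
             the symmetric Laplace case
    """
    e = np.asarray(errors, dtype=float)
    # Asymmetric check loss rho_tau(u) = u * (tau - 1{u < 0}),
    # which weights positive and negative errors differently.
    rho = e * (tau - (e < 0)) / sigma
    # Constant term from the ALD normalizer tau * (1 - tau) / sigma.
    return np.mean(np.log(sigma) - np.log(tau * (1.0 - tau)) + rho)
```

In this sketch, moving `tau` away from 0.5 penalizes errors of one sign more heavily than the other, which is one way to read the paper's claim that the asymmetry parameter trades off noise reduction against speech preservation; the exact mapping between error sign and residual noise versus speech distortion depends on the chosen sign convention.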


 DOI: 10.21437/Interspeech.2018-1439

Cite as: Chai, L., Du, J., Lee, C. (2018) Error Modeling via Asymmetric Laplace Distribution for Deep Neural Network Based Single-Channel Speech Enhancement. Proc. Interspeech 2018, 3269-3273, DOI: 10.21437/Interspeech.2018-1439.


@inproceedings{Chai2018,
  author={Li Chai and Jun Du and Chin-Hui Lee},
  title={Error Modeling via Asymmetric Laplace Distribution for Deep Neural Network Based Single-Channel Speech Enhancement},
  year={2018},
  booktitle={Proc. Interspeech 2018},
  pages={3269--3273},
  doi={10.21437/Interspeech.2018-1439},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1439}
}