A Statistically Principled and Computationally Efficient Approach to Speech Enhancement Using Variational Autoencoders

Manuel Pariente, Antoine Deleforge, Emmanuel Vincent


Recent studies have explored the use of deep generative models of speech spectra based on variational autoencoders (VAEs), combined with unsupervised noise models, to perform speech enhancement. These studies developed iterative algorithms involving either Gibbs sampling or gradient descent at each step, making them computationally expensive. This paper proposes a variational inference method to iteratively estimate the power spectrogram of the clean speech. Our main contribution is the analytical derivation of the variational steps, in which the encoder of the pre-learned VAE can be used to estimate the variational approximation of the true posterior distribution, using the very same assumption made to train VAEs. Experiments show that the proposed method produces results on par with the aforementioned iterative methods using sampling, while decreasing the computational cost needed to reach a given performance by a factor of 36.
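The core idea described above can be illustrated with a toy NumPy sketch: a pre-trained VAE models clean-speech power spectra, an unsupervised NMF model accounts for the noise, and enhancement alternates between (i) feeding the current speech estimate through the encoder to approximate the posterior and (ii) a Wiener-like update of the speech power spectrogram plus multiplicative NMF updates of the noise. All shapes, weights, and update rules below are illustrative stand-ins, not the paper's actual model or derivation.

```python
import numpy as np

rng = np.random.default_rng(0)

F, T, L = 16, 20, 4  # frequency bins, time frames, latent dimension

# Hypothetical stand-ins for a pre-trained VAE (random linear maps here):
W_enc = rng.normal(scale=0.1, size=(L, F))  # "encoder" weights
W_dec = rng.normal(scale=0.1, size=(F, L))  # "decoder" weights

def encode(s_pow):
    """Toy encoder: map a speech power spectrogram to latent means (L, T)."""
    return W_enc @ np.log1p(s_pow)

def decode(z):
    """Toy decoder: latent codes -> positive speech variances (F, T)."""
    return np.exp(W_dec @ z)

# Toy noisy observation (power spectrogram of the mixture)
x_pow = rng.gamma(shape=1.0, scale=1.0, size=(F, T)) + 0.5

# Unsupervised noise model: rank-2 NMF, noise variance ~ W_n @ H_n
K = 2
W_n = np.abs(rng.normal(size=(F, K))) + 1e-3
H_n = np.abs(rng.normal(size=(K, T))) + 1e-3

s_pow = x_pow.copy()  # initialize the clean-speech power estimate
for _ in range(10):
    # Variational step: the encoder approximates the posterior over latents
    z = encode(s_pow)
    v_s = decode(z)          # speech variance from the decoder
    v_n = W_n @ H_n          # noise variance from the NMF model
    # Wiener-like update: posterior mean power of the clean speech
    gain = v_s / (v_s + v_n)
    s_pow = gain**2 * x_pow + gain * v_n
    # Noise update: standard KL multiplicative NMF rules on the residual
    r = np.maximum(x_pow - s_pow, 1e-8)
    H_n *= (W_n.T @ (r / (W_n @ H_n))) / (W_n.T @ np.ones_like(r))
    W_n *= ((r / (W_n @ H_n)) @ H_n.T) / (np.ones_like(r) @ H_n.T)
```

The actual method replaces these random linear maps with a VAE trained on clean speech and derives the updates analytically from the variational objective; this sketch only shows the alternating structure of the inference loop.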


DOI: 10.21437/Interspeech.2019-1398

Cite as: Pariente, M., Deleforge, A., Vincent, E. (2019) A Statistically Principled and Computationally Efficient Approach to Speech Enhancement Using Variational Autoencoders. Proc. Interspeech 2019, 3158-3162, DOI: 10.21437/Interspeech.2019-1398.


@inproceedings{Pariente2019,
  author={Manuel Pariente and Antoine Deleforge and Emmanuel Vincent},
  title={{A Statistically Principled and Computationally Efficient Approach to Speech Enhancement Using Variational Autoencoders}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={3158--3162},
  doi={10.21437/Interspeech.2019-1398},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1398}
}