A Unified Bayesian Source Modelling for Determined Blind Source Separation

Chaitanya Narisetty


This paper proposes a determined blind source separation (BSS) method with a Bayesian generalization for the unified modelling of multiple audio sources. Our probabilistic framework allows flexible multi-source modelling in which the number of latent features required by the unified model is estimated optimally. When the latent features of the unified model are partitioned to represent individual sources, the proposed approach avoids over-fitting or under-fitting the correlations among sources. This adaptability of our Bayesian generalization therefore adds flexibility over conventional BSS approaches, where the number of latent features in the unified model must be specified in advance. On the task of separating speech mixture signals, we show that the proposed method models diverse sources flexibly and markedly improves separation performance compared to conventional methods.
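
To give a rough feel for the core idea, the sketch below shows one simple way to estimate the number of latent features of a nonnegative source model automatically: start from a generous number of bases and prune those whose contribution to the reconstruction is negligible. This is only a minimal, hypothetical illustration of automatic latent-feature selection; it is not the paper's Bayesian algorithm, and the function name nmf_with_pruning, the energy-based pruning rule, and all parameter values are assumptions introduced here for illustration.

import numpy as np

def nmf_with_pruning(V, K_max=20, n_iter=200, tol=1e-3, eps=1e-12, seed=0):
    # Multiplicative-update NMF (Euclidean cost) started with K_max latent
    # bases; bases with negligible reconstruction energy are pruned.
    # Hypothetical helper, a crude stand-in for Bayesian relevance estimation.
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, K_max)) + eps   # spectral bases
    H = rng.random((K_max, T)) + eps   # activations
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    # Relative energy of each latent feature in the reconstruction
    energy = np.array([np.sum(np.outer(W[:, k], H[k]) ** 2) for k in range(K_max)])
    keep = energy / energy.sum() > tol
    return W[:, keep], H[keep], int(keep.sum())

# Toy usage: a nonnegative "spectrogram" assembled from 3 latent features
rng = np.random.default_rng(1)
V = rng.random((64, 3)) @ rng.random((3, 100))
W, H, K_est = nmf_with_pruning(V)
print("estimated number of latent features:", K_est)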


DOI: 10.21437/Interspeech.2019-1272

Cite as: Narisetty, C. (2019) A Unified Bayesian Source Modelling for Determined Blind Source Separation. Proc. Interspeech 2019, 1343-1347, DOI: 10.21437/Interspeech.2019-1272.


@inproceedings{Narisetty2019,
  author={Chaitanya Narisetty},
  title={{A Unified Bayesian Source Modelling for Determined Blind Source Separation}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={1343--1347},
  doi={10.21437/Interspeech.2019-1272},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1272}
}