Towards Automated Single Channel Source Separation Using Neural Networks

Arpita Gang, Pravesh Biyani, Akshay Soni


Many applications of single-channel source separation (SCSS), including automatic speech recognition (ASR) and hearing aids, require the estimation of only one source from a mixture of many sources. Treating this special case as a regular SCSS problem, wherein all constituent sources are given equal priority in terms of reconstruction, may result in suboptimal separation performance. In this paper, we tackle the one-source separation problem by suitably modifying the orthodox SCSS framework and focusing on only one source at a time. The proposed approach is a generic framework that can be applied to any existing SCSS algorithm, improves performance, and, unlike most existing SCSS methods, scales well when there are more than two sources in the mixture. Additionally, existing SCSS algorithms rely on fine hyper-parameter tuning, making them difficult to use in practice. Our framework takes a step towards automatic tuning of the hyper-parameters, thereby making our method better suited to the mixture being separated and thus more useful in practice. We test our framework on a neural-network-based algorithm, and the results show improved performance in terms of source-to-distortion ratio (SDR) and source-to-artifact ratio (SAR).
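The SDR and SAR figures reported by the paper come from the standard BSS Eval family of metrics, which decompose an estimated source into a target component plus error terms. As a rough illustration (this is a minimal NumPy sketch of the single-reference, no-interference case, not the authors' evaluation code; the function name `sdr_sar` is our own), the target component is obtained by projecting the estimate onto the reference, and the remaining energy is counted as distortion/artifacts:

```python
import numpy as np

def sdr_sar(reference, estimate):
    """Compute SDR and SAR (in dB) for one estimated source, using the
    BSS Eval-style decomposition estimate = s_target + e_artifacts.
    With a single reference and no interference term, SAR equals SDR."""
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    # Orthogonal projection of the estimate onto the reference signal.
    scale = np.dot(estimate, reference) / np.dot(reference, reference)
    s_target = scale * reference
    e_artifacts = estimate - s_target
    sdr = 10.0 * np.log10(np.sum(s_target**2) / np.sum(e_artifacts**2))
    return sdr, sdr  # (SDR, SAR); identical in this simplified setting

# Toy check: a lightly corrupted sinusoid should score a high SDR.
rng = np.random.default_rng(0)
t = np.arange(8000) / 8000.0
source = np.sin(2 * np.pi * 440.0 * t)                    # clean reference
estimate = source + 0.01 * rng.standard_normal(t.size)    # small distortion
sdr, sar = sdr_sar(source, estimate)
```

In the full BSS Eval framework the projection is taken onto the subspace spanned by all reference sources, which separates interference from artifacts; the one-source case above collapses those terms, matching the paper's focus on estimating a single source.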


DOI: 10.21437/Interspeech.2018-2065

Cite as: Gang, A., Biyani, P., Soni, A. (2018) Towards Automated Single Channel Source Separation Using Neural Networks. Proc. Interspeech 2018, 3494-3498, DOI: 10.21437/Interspeech.2018-2065.


@inproceedings{Gang2018,
  author={Arpita Gang and Pravesh Biyani and Akshay Soni},
  title={Towards Automated Single Channel Source Separation Using Neural Networks},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={3494--3498},
  doi={10.21437/Interspeech.2018-2065},
  url={http://dx.doi.org/10.21437/Interspeech.2018-2065}
}