Improving Source Separation via Multi-Speaker Representations

Jeroen Zegers, Hugo Van hamme


Lately there have been novel developments in deep learning towards solving the cocktail party problem. Initial results are promising and open the domain to further research. One technique that has not yet been explored in the neural network approach to this task is speaker adaptation. Intuitively, information about the speakers to be separated seems fundamental to the speaker separation task. However, retrieving this speaker information is challenging, since the speaker identities are not known a priori and multiple speakers are simultaneously active; this creates a chicken-and-egg problem. To tackle it, source signals and i-vectors are estimated alternately. We show that blind multi-speaker adaptation improves the results of the network and that, in our case, the network is not capable of adequately retrieving this useful speaker information by itself.
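The alternating scheme described above can be sketched as a simple loop. This is only an illustrative toy, not the paper's system: `separate` stands in for the deep separation network conditioned on speaker representations, and `extract_ivector` stands in for a real i-vector extractor; both are hypothetical placeholders operating on random feature matrices.

```python
import numpy as np

rng = np.random.default_rng(0)

def separate(mixture, ivectors):
    """Toy separator: soft-mask the mixture per speaker.

    Assumption: a stand-in for the separation network that is
    conditioned on the current speaker representations.
    """
    # Score each frame against each speaker vector, then softmax
    # across speakers so the masks sum to one per frame.
    scores = mixture @ ivectors.T                 # (frames, n_spk)
    masks = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return [mixture * masks[:, k:k + 1] for k in range(ivectors.shape[0])]

def extract_ivector(source):
    """Toy i-vector extractor: mean feature vector of the estimated source."""
    return source.mean(axis=0)

# A two-speaker mixture, represented as a (frames, features) matrix.
mixture = rng.standard_normal((50, 20))
n_spk = 2

# Chicken-and-egg loop: start from near-uninformative speaker vectors,
# then alternate source estimation and speaker re-estimation.
ivectors = rng.standard_normal((n_spk, 20)) * 0.01
for _ in range(5):
    sources = separate(mixture, ivectors)
    ivectors = np.stack([extract_ivector(s) for s in sources])
```

Because the soft masks sum to one in every frame, the estimated sources always add back up to the mixture, which makes the alternation well-defined at every iteration.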


 DOI: 10.21437/Interspeech.2017-754

Cite as: Zegers, J., Van hamme, H. (2017) Improving Source Separation via Multi-Speaker Representations. Proc. Interspeech 2017, 1919-1923, DOI: 10.21437/Interspeech.2017-754.


@inproceedings{Zegers2017,
  author={Jeroen Zegers and Hugo Van hamme},
  title={Improving Source Separation via Multi-Speaker Representations},
  year={2017},
  booktitle={Proc. Interspeech 2017},
  pages={1919--1923},
  doi={10.21437/Interspeech.2017-754},
  url={http://dx.doi.org/10.21437/Interspeech.2017-754}
}