Reducing Interference with Phase Recovery in DNN-based Monaural Singing Voice Separation

Paul Magron, Konstantinos Drossos, Stylianos Ioannis Mimilakis, Tuomas Virtanen


State-of-the-art methods for monaural singing voice separation consist of estimating the magnitude spectrum of the voice in the short-time Fourier transform (STFT) domain by means of deep neural networks (DNNs). The resulting magnitude estimate is then combined with the mixture's phase to retrieve the complex-valued STFT of the voice, which is further synthesized into a time-domain signal. However, when the sources overlap in time and frequency, the STFT phase of the voice differs from the mixture's phase, which results in interference and artifacts in the estimated signals. In this paper, we investigate recent phase recovery algorithms that tackle this issue and can further enhance the separation quality. These algorithms exploit phase constraints that originate from a sinusoidal model or from consistency, a property that is a direct consequence of the redundancy of the STFT. Experiments conducted on real music songs show that these algorithms are effective at reducing interference in the estimated voice compared to the baseline approach.
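The two reconstruction strategies contrasted in the abstract can be sketched in a few lines: the baseline pairs the estimated magnitude with the mixture's phase, while a consistency-based refinement alternates between the time domain (where STFT consistency is enforced by construction) and re-imposing the target magnitude, in the spirit of Griffin-Lim iterations. The sketch below is illustrative only, not the authors' implementation; the magnitude estimate `V_voice` stands in for a DNN output and is faked here from a toy signal.

```python
import numpy as np
from scipy.signal import stft, istft

rng = np.random.default_rng(0)
x = rng.standard_normal(2048)  # toy "mixture" signal

# STFT of the mixture
_, _, X = stft(x, nperseg=256)

# Placeholder for a DNN magnitude estimate of the voice
# (here: a scaled copy of the mixture magnitude).
V_voice = 0.8 * np.abs(X)

# Baseline: combine the magnitude estimate with the mixture's phase.
S_baseline = V_voice * np.exp(1j * np.angle(X))
_, s_baseline = istft(S_baseline, nperseg=256)

# Consistency-based refinement: each istft/stft round trip projects
# onto the set of consistent STFTs; re-imposing V_voice projects onto
# the set of spectrograms with the target magnitude.
S = S_baseline
for _ in range(20):
    _, s = istft(S, nperseg=256)
    _, _, S = stft(s, nperseg=256)
    S = V_voice * np.exp(1j * np.angle(S))
_, s_refined = istft(S, nperseg=256)
```

After the refinement loop, the magnitude of the STFT of `s_refined` is typically closer to `V_voice` than that of the baseline signal, i.e. the estimate is more consistent with the target magnitude.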


DOI: 10.21437/Interspeech.2018-1845

Cite as: Magron, P., Drossos, K., Ioannis Mimilakis, S., Virtanen, T. (2018) Reducing Interference with Phase Recovery in DNN-based Monaural Singing Voice Separation. Proc. Interspeech 2018, 332-336, DOI: 10.21437/Interspeech.2018-1845.


@inproceedings{Magron2018,
  author={Paul Magron and Konstantinos Drossos and Stylianos {Ioannis Mimilakis} and Tuomas Virtanen},
  title={Reducing Interference with Phase Recovery in DNN-based Monaural Singing Voice Separation},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={332--336},
  doi={10.21437/Interspeech.2018-1845},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1845}
}