Recently, supervised speech separation has been extensively studied and has shown considerable promise. Owing to the temporal continuity of speech, auditory features and separation targets exhibit prominent spectro-temporal structures and strong correlations in the time-frequency (T-F) domain, which can be exploited for speech separation. However, many supervised speech separation methods model each T-F unit independently with only a single target, largely ignoring this useful information. In this paper, we propose a two-stage multi-target joint learning method that jointly models related speech separation targets at the frame level. Systematic experiments show that the proposed approach consistently achieves better separation and generalization performance under low signal-to-noise ratio (SNR) conditions.
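To make the contrast concrete, here is a minimal NumPy sketch of the idea the abstract describes, not the authors' implementation: the shapes, the random stand-in spectra, and the choice of the ideal ratio mask (IRM) plus log-magnitude spectrum as the two related targets are illustrative assumptions. It shows how frame-level targets can be stacked so that a single model would predict correlated targets jointly per frame rather than one target per isolated T-F unit.

```python
import numpy as np

# Hypothetical shapes: F frequency bins, T frames (stand-in data).
F, T = 64, 100
rng = np.random.default_rng(0)
speech = np.abs(rng.standard_normal((F, T)))  # clean-speech magnitudes (stand-in)
noise = np.abs(rng.standard_normal((F, T)))   # noise magnitudes (stand-in)

# One common separation target: the ideal ratio mask (IRM),
# defined per T-F unit from speech and noise energies.
irm = speech**2 / (speech**2 + noise**2)

# A single-target system predicts each T-F unit's mask in isolation.
# A frame-level multi-target formulation instead stacks, for every
# frame t, a vector of related targets -- here the whole mask column
# plus the clean log-magnitude spectrum -- so cross-frequency structure
# and inter-target correlations can be exploited jointly.
multi_target = np.concatenate([irm, np.log(speech + 1e-8)], axis=0)
print(multi_target.shape)  # (128, 100): 2F joint targets per frame
```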
Cite as: Nie, S., Liang, S., Xue, W., Zhang, X., Liu, W., Dong, L., Yang, H. (2015) Two-stage multi-target joint learning for monaural speech separation. Proc. Interspeech 2015, 1503-1507, doi: 10.21437/Interspeech.2015-357
@inproceedings{nie15_interspeech,
  author={Shuai Nie and Shan Liang and Wei Xue and Xueliang Zhang and Wenju Liu and Like Dong and Hong Yang},
  title={{Two-stage multi-target joint learning for monaural speech separation}},
  year=2015,
  booktitle={Proc. Interspeech 2015},
  pages={1503--1507},
  doi={10.21437/Interspeech.2015-357}
}