Deep Extractor Network for Target Speaker Recovery from Single Channel Speech Mixtures

Jun Wang, Jie Chen, Dan Su, Lianwu Chen, Meng Yu, Yanmin Qian, Dong Yu


Speaker-aware source separation methods are promising workarounds for major difficulties such as arbitrary source permutation and an unknown number of sources. However, it remains challenging to achieve satisfactory performance when only a very short target speaker utterance (anchor) is available. Here we present a novel "deep extractor network" which creates an extractor point for the target speaker in a canonical high-dimensional embedding space and pulls together the time-frequency bins corresponding to the target speaker. The proposed model differs from prior works in that the canonical embedding space encodes knowledge of both the anchor and the mixture during the training phase: first, embeddings for the anchor and mixture speech are constructed separately in a primary embedding space, then combined as input to feed-forward layers that transform them into a canonical embedding space, which we find more stable than the primary one. Experimental results show that, given a very short utterance, the proposed model can efficiently recover high-quality target speech from a mixture, outperforming various baseline models with 5.2% and 6.6% relative improvements in SDR and PESQ, respectively, compared with a baseline oracle deep attractor model. Meanwhile, we show that it generalizes well to more than one interfering speaker.
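The core mechanism described above, forming an extractor point from anchor-derived embeddings and using its similarity to each time-frequency embedding of the mixture as a soft mask, can be sketched roughly as follows. This is a minimal illustration, not the paper's architecture: the embeddings here are random placeholders, whereas in the paper they are produced by learned networks, and the mean-pooling and sigmoid-similarity choices are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

T, F, D = 10, 129, 40  # frames, frequency bins, embedding dim (illustrative sizes)

# Hypothetical canonical-space embeddings for the mixture's T-F bins.
# In the paper these come from learned layers over anchor + mixture features.
mix_emb = rng.standard_normal((T * F, D))

# Extractor point for the target speaker: here, simply the mean of
# (hypothetical) anchor-derived embeddings in the same canonical space.
anchor_emb = rng.standard_normal((T * F, D))
extractor = anchor_emb.mean(axis=0)            # shape (D,)

# Similarity between the extractor point and each T-F embedding yields a
# soft mask that "pulls together" the target speaker's bins.
logits = mix_emb @ extractor                   # shape (T*F,)
mask = 1.0 / (1.0 + np.exp(-logits))           # sigmoid -> values in (0, 1)

# The mask would then be applied to the mixture spectrogram to recover
# the target speaker's speech.
```

In the actual model, the extractor point and embeddings are trained end-to-end so that the masked mixture approximates the clean target speech; the sketch only shows the shape of the similarity-to-mask computation.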


 DOI: 10.21437/Interspeech.2018-1205

Cite as: Wang, J., Chen, J., Su, D., Chen, L., Yu, M., Qian, Y., Yu, D. (2018) Deep Extractor Network for Target Speaker Recovery from Single Channel Speech Mixtures. Proc. Interspeech 2018, 307-311, DOI: 10.21437/Interspeech.2018-1205.


@inproceedings{Wang2018,
  author={Jun Wang and Jie Chen and Dan Su and Lianwu Chen and Meng Yu and Yanmin Qian and Dong Yu},
  title={Deep Extractor Network for Target Speaker Recovery from Single Channel Speech Mixtures},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={307--311},
  doi={10.21437/Interspeech.2018-1205},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1205}
}