ISCA Archive Interspeech 2016

Single-Channel Multi-Speaker Separation Using Deep Clustering

Yusuf Isik, Jonathan Le Roux, Zhuo Chen, Shinji Watanabe, John R. Hershey

Deep clustering is a recently introduced deep learning architecture that uses discriminatively trained embeddings as the basis for clustering. It was recently applied to spectrogram segmentation, yielding impressive results on speaker-independent multi-speaker separation. In this paper we extend the baseline system with an end-to-end signal approximation objective that greatly improves performance on a challenging speech separation task. We first significantly improve upon the baseline system performance by incorporating better regularization, larger temporal context, and a deeper architecture, culminating in an overall improvement in signal-to-distortion ratio (SDR) of 10.3 dB compared to the baseline of 6.0 dB for two-speaker separation, as well as a 7.1 dB SDR improvement for three-speaker separation. We then extend the model to incorporate an enhancement layer to refine the signal estimates, and perform end-to-end training through both the clustering and enhancement stages to maximize signal fidelity. We evaluate the results using automatic speech recognition. The new signal approximation objective, combined with end-to-end training, produces unprecedented performance, reducing the word error rate (WER) from 89.1% down to 30.8%. This represents a major advance towards solving the cocktail party problem.
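To make the embedding objective behind the baseline deep clustering system concrete, the sketch below computes the affinity-matching loss |VV^T - YY^T|_F^2 without ever forming the N x N affinity matrices. This is a minimal illustrative sketch, not the authors' implementation: the array names V (unit-norm embeddings, one per time-frequency bin), Y (one-hot ideal speaker assignments), and the helper name deep_clustering_loss are assumptions made for the example.

```python
import numpy as np

def deep_clustering_loss(V, Y):
    """Affinity-matching objective |VV^T - YY^T|_F^2.

    V : (N, D) array of unit-norm embeddings, one row per time-frequency bin.
    Y : (N, C) one-hot array assigning each bin to one of C speakers.

    Expanding the Frobenius norm gives
        |V^T V|_F^2 - 2 |V^T Y|_F^2 + |Y^T Y|_F^2,
    which only requires small D x D, D x C, and C x C products.
    """
    vtv = V.T @ V   # (D, D)
    vty = V.T @ Y   # (D, C)
    yty = Y.T @ Y   # (C, C)
    return np.sum(vtv ** 2) - 2.0 * np.sum(vty ** 2) + np.sum(yty ** 2)

# Illustrative usage: 100 time-frequency bins, 40-dim embeddings, 2 speakers.
rng = np.random.default_rng(0)
V = rng.standard_normal((100, 40))
V /= np.linalg.norm(V, axis=1, keepdims=True)   # unit-norm embeddings
Y = np.eye(2)[rng.integers(0, 2, size=100)]     # random one-hot assignments
print(deep_clustering_loss(V, Y))
```

At test time the trained embeddings are clustered (e.g., with k-means) to produce a binary time-frequency mask per speaker; the enhancement stage and end-to-end signal approximation objective described in the paper then refine the masked signal estimates.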


doi: 10.21437/Interspeech.2016-1176

Cite as: Isik, Y., Le Roux, J., Chen, Z., Watanabe, S., Hershey, J.R. (2016) Single-Channel Multi-Speaker Separation Using Deep Clustering. Proc. Interspeech 2016, 545-549, doi: 10.21437/Interspeech.2016-1176

@inproceedings{isik16_interspeech,
  author={Yusuf Isik and Jonathan Le Roux and Zhuo Chen and Shinji Watanabe and John R. Hershey},
  title={{Single-Channel Multi-Speaker Separation Using Deep Clustering}},
  year=2016,
  booktitle={Proc. Interspeech 2016},
  pages={545--549},
  doi={10.21437/Interspeech.2016-1176}
}