Neural Speech Turn Segmentation and Affinity Propagation for Speaker Diarization

Ruiqing Yin, Hervé Bredin, Claude Barras


Speaker diarization is the task of determining who speaks when in an audio stream. Most diarization systems rely on statistical models to address four sub-tasks: speech activity detection (SAD), speaker change detection (SCD), speech turn clustering, and re-segmentation. First, following the recent success of recurrent neural networks (RNN) for SAD and SCD, we propose to address re-segmentation with Long Short-Term Memory (LSTM) networks. Then, we propose to use affinity propagation on top of neural speaker embeddings for speech turn clustering, outperforming regular Hierarchical Agglomerative Clustering (HAC). Finally, all these modules are combined and jointly optimized to form a speaker diarization pipeline in which all but the clustering step are based on RNNs. We provide experimental results on the French broadcast dataset ETAPE, where we reach state-of-the-art performance.
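
The clustering step described above replaces HAC with affinity propagation applied to neural speaker embeddings. Below is a minimal sketch of that idea using scikit-learn's AffinityPropagation on a precomputed similarity matrix; the random embeddings, cosine similarity, and the preference/damping values are illustrative assumptions, not the configuration used in the paper.

# Hedged sketch: clustering speech-turn embeddings with affinity propagation,
# as an alternative to hierarchical agglomerative clustering (HAC).
# Embeddings, similarity measure, and hyperparameters are illustrative only.

import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.metrics.pairwise import cosine_similarity

# Assume one fixed-size neural speaker embedding per detected speech turn.
# Here we fabricate 20 turns of 128-dimensional embeddings for illustration.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(20, 128))

# Similarity matrix between speech turns (cosine similarity is an assumption).
similarity = cosine_similarity(embeddings)

# Affinity propagation selects the number of clusters automatically; the
# 'preference' value (here, the median similarity) controls how many exemplars
# emerge and would be tuned on a development set in practice.
clustering = AffinityPropagation(
    affinity="precomputed",
    preference=np.median(similarity),
    damping=0.7,
    random_state=0,
)
labels = clustering.fit_predict(similarity)

# Each speech turn now carries a speaker cluster label.
print(labels)

Unlike HAC, affinity propagation needs no stopping threshold on a dendrogram: the number of speakers emerges from the message-passing procedure and the chosen preference value.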


DOI: 10.21437/Interspeech.2018-1750

Cite as: Yin, R., Bredin, H., Barras, C. (2018) Neural Speech Turn Segmentation and Affinity Propagation for Speaker Diarization. Proc. Interspeech 2018, 1393-1397, DOI: 10.21437/Interspeech.2018-1750.


@inproceedings{Yin2018,
  author={Ruiqing Yin and Hervé Bredin and Claude Barras},
  title={Neural Speech Turn Segmentation and Affinity Propagation for Speaker Diarization},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={1393--1397},
  doi={10.21437/Interspeech.2018-1750},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1750}
}