Real-time speaker diarization has many potential applications, including public security, biometrics, and forensics. It can also significantly speed up the indexing of increasingly large multimedia archives. In this paper, we address the issue of low-latency speaker diarization, which consists in continuously detecting new or reoccurring speakers within an audio stream and determining when each speaker is active with low latency (e.g., every second). This is in contrast with most existing approaches to speaker diarization, which rely on multiple passes over the complete audio recording. The proposed approach combines speaker turn neural embeddings with an incremental structure prediction approach inspired by state-of-the-art Natural Language Processing models for Part-of-Speech tagging and dependency parsing. It can therefore leverage both information describing the utterance and the inherent temporal structure of interactions between speakers to learn, in a supervised framework, to identify speakers. Experiments on the ETAPE broadcast news benchmark validate the approach.
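The approach described above is supervised and learned from data; as a rough illustration of the online decision faced at each step (assign an incoming speaker turn to an already-seen speaker, or open a new one), here is a minimal greedy sketch over turn embeddings. The cosine-similarity threshold, running-mean centroids, and greedy assignment rule are illustrative assumptions, not the paper's method.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def incremental_diarization(turn_embeddings, threshold=0.7):
    """Greedy online clustering of speaker-turn embeddings.

    Each incoming turn is assigned to the existing speaker whose
    centroid is most similar (if above `threshold`), otherwise a new
    speaker label is created. Returns one integer label per turn.
    NB: this is an illustrative baseline, not the learned structure
    prediction model of the paper.
    """
    centroids = []  # running mean embedding per detected speaker
    counts = []     # number of turns assigned to each speaker
    labels = []
    for emb in turn_embeddings:
        best, best_sim = None, -1.0
        for i, c in enumerate(centroids):
            sim = cosine(emb, c)
            if sim > best_sim:
                best, best_sim = i, sim
        if best is not None and best_sim >= threshold:
            # reoccurring speaker: update its running-mean centroid
            n = counts[best]
            centroids[best] = [(n * c + e) / (n + 1)
                               for c, e in zip(centroids[best], emb)]
            counts[best] += 1
            labels.append(best)
        else:
            # new speaker detected
            centroids.append(list(emb))
            counts.append(1)
            labels.append(len(centroids) - 1)
    return labels
```

Because each turn is labeled as soon as its embedding is available, the latency of such a scheme is bounded by the duration of a single turn, in line with the low-latency setting targeted by the paper.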
Cite as: Wisniewski, G., Bredin, H., Gelly, G., Barras, C. (2017) Combining Speaker Turn Embedding and Incremental Structure Prediction for Low-Latency Speaker Diarization. Proc. Interspeech 2017, 3582-3586, doi: 10.21437/Interspeech.2017-1067
@inproceedings{wisniewski17_interspeech,
  author={Guillaume Wisniewski and Hervé Bredin and G. Gelly and Claude Barras},
  title={{Combining Speaker Turn Embedding and Incremental Structure Prediction for Low-Latency Speaker Diarization}},
  year=2017,
  booktitle={Proc. Interspeech 2017},
  pages={3582--3586},
  doi={10.21437/Interspeech.2017-1067}
}