Probabilistic Embeddings for Speaker Diarization

Anna Silnova, Niko Brummer, Johan Rohdin, Themos Stafylakis, Lukas Burget


Speaker embeddings (x-vectors) extracted from very short segments of speech have recently been shown to give competitive performance in speaker diarization. We generalize this recipe by extracting from each speech segment, in parallel with the x-vector, also a diagonal precision matrix, thus providing a path for the propagation of information about the quality of the speech segment into a PLDA scoring backend. These precisions quantify the uncertainty about what the values of the embeddings might have been if they had been extracted from high quality speech segments. The proposed \emph{probabilistic embeddings} (x-vectors with precisions) are interfaced with the PLDA model by treating the x-vectors as hidden variables and marginalizing them out. We apply the proposed probabilistic embeddings as input to an agglomerative hierarchical clustering (AHC) algorithm to perform diarization on the DIHARD'19 evaluation set. We compute the full PLDA likelihood `by the book' for each clustering hypothesis considered by AHC. We show that this gives accuracy gains relative to a baseline AHC algorithm that is applied to traditional x-vectors (without uncertainty) and that scores by averaging binary log-likelihood-ratios rather than by the book.
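The marginalization step described in the abstract can be sketched for a single segment. This is a minimal hypothetical illustration (names and parameter values are ours, not the paper's code): if the PLDA prior over a hidden x-vector is x ~ N(mu, C), and the extractor reports an observed embedding e together with a diagonal precision matrix diag(p), so that e | x ~ N(x, diag(1/p)), then marginalizing out the hidden x-vector gives the Gaussian marginal N(e; mu, C + diag(1/p)).

```python
import numpy as np

# A toy sketch of uncertainty propagation (hypothetical setup, not the
# paper's implementation): the per-dimension precisions reported with the
# x-vector inflate the marginal covariance, so unreliable dimensions
# contribute less evidence to the PLDA score.

def gaussian_logpdf(x, mean, cov):
    """Log-density of a multivariate Gaussian evaluated at x."""
    d = x.size
    diff = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d * np.log(2.0 * np.pi) + logdet
                   + diff @ np.linalg.solve(cov, diff))

rng = np.random.default_rng(0)
d = 4                                   # toy embedding dimension
mu = np.zeros(d)                        # assumed PLDA prior mean
C = np.eye(d)                           # assumed PLDA prior covariance
e = rng.standard_normal(d)              # observed x-vector for one segment
p = np.array([10.0, 10.0, 0.5, 0.5])    # per-dimension precisions

# Marginal likelihood of the observation with the hidden x-vector
# integrated out: N(e; mu, C + diag(1/p)).
marginal_ll = gaussian_logpdf(e, mu, C + np.diag(1.0 / p))
print(marginal_ll)
```

In the paper this idea is applied inside AHC: the full PLDA likelihood of each clustering hypothesis is computed with the hidden x-vectors of all segments marginalized out, rather than by averaging pairwise log-likelihood-ratios.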


 DOI: 10.21437/Odyssey.2020-4

Cite as: Silnova, A., Brummer, N., Rohdin, J., Stafylakis, T., Burget, L. (2020) Probabilistic Embeddings for Speaker Diarization. Proc. Odyssey 2020 The Speaker and Language Recognition Workshop, 24-31, DOI: 10.21437/Odyssey.2020-4.


@inproceedings{Silnova2020,
  author={Anna Silnova and Niko Brummer and Johan Rohdin and Themos Stafylakis and Lukas Burget},
  title={{Probabilistic Embeddings for Speaker Diarization}},
  year=2020,
  booktitle={Proc. Odyssey 2020 The Speaker and Language Recognition Workshop},
  pages={24--31},
  doi={10.21437/Odyssey.2020-4},
  url={http://dx.doi.org/10.21437/Odyssey.2020-4}
}