ISCA Archive Interspeech 2021

Speaker Embeddings by Modeling Channel-Wise Correlations

Themos Stafylakis, Johan Rohdin, Lukáš Burget

Speaker embeddings extracted with deep 2D convolutional neural networks are typically modeled as projections of first- and second-order statistics of channel-frequency pairs onto a linear layer, using either average or attentive pooling along the time axis. In this paper we examine an alternative pooling method, where pairwise correlations between channels for given frequencies are used as statistics. The method is inspired by style-transfer methods in computer vision, where the style of an image, modeled by the matrix of channel-wise correlations, is transferred to another image in order to produce a new image having the style of the first and the content of the second. By drawing analogies between image style and speaker characteristics, and between image content and phonetic sequence, we explore the use of such channel-wise correlation features to train a ResNet architecture in an end-to-end fashion. Our experiments on VoxCeleb demonstrate the effectiveness of the proposed pooling method in speaker recognition.
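To make the pooling idea concrete, the following is a minimal NumPy sketch of Gram-matrix-style pooling: for each frequency bin, the correlations between all channel pairs are computed across time and flattened into a statistics vector. The function name, the per-frequency normalization, and the toy tensor shapes are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def channel_correlation_pooling(x):
    """Pool a (C, F, T) feature map into per-frequency channel-wise
    correlation statistics, in the spirit of style-transfer Gram matrices.
    Sketch only: the paper's exact normalization may differ."""
    C, F, T = x.shape
    iu = np.triu_indices(C)  # upper triangle (incl. diagonal) suffices, by symmetry
    stats = []
    for f in range(F):
        a = x[:, f, :]                          # (C, T) activations at frequency f
        a = a - a.mean(axis=1, keepdims=True)   # center over time
        gram = a @ a.T / T                      # (C, C) channel covariance over time
        d = np.sqrt(np.diag(gram)) + 1e-8       # per-channel std (eps for stability)
        corr = gram / np.outer(d, d)            # normalize covariance to correlation
        stats.append(corr[iu])
    return np.concatenate(stats)                # shape: (F * C*(C+1)/2,)

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 100))         # toy map: C=8 channels, F=4 freqs, T=100
emb_stats = channel_correlation_pooling(feat)
print(emb_stats.shape)                          # (144,) since 4 * 8*9/2 = 144
```

In an end-to-end system these statistics would be produced by a differentiable pooling layer and projected onto a linear layer to yield the speaker embedding, replacing the usual mean/standard-deviation time pooling.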


doi: 10.21437/Interspeech.2021-1442

Cite as: Stafylakis, T., Rohdin, J., Burget, L. (2021) Speaker Embeddings by Modeling Channel-Wise Correlations. Proc. Interspeech 2021, 501-505, doi: 10.21437/Interspeech.2021-1442

@inproceedings{stafylakis21_interspeech,
  author={Themos Stafylakis and Johan Rohdin and Lukáš Burget},
  title={{Speaker Embeddings by Modeling Channel-Wise Correlations}},
  year=2021,
  booktitle={Proc. Interspeech 2021},
  pages={501--505},
  doi={10.21437/Interspeech.2021-1442}
}