Analysis of Length Normalization in End-to-End Speaker Verification System

Weicheng Cai, Jinkun Chen, Ming Li


The classical i-vectors and the latest end-to-end deep speaker embeddings are the two representative categories of utterance-level representations in automatic speaker verification systems. Traditionally, once i-vectors or deep speaker embeddings are extracted, we rely on an extra length normalization step to project the representations onto the unit hypersphere before back-end modeling. In this paper, we explore how a neural network can learn length-normalized deep speaker embeddings in an end-to-end manner. To this end, we add a length normalization layer followed by a scale layer before the output layer of a common classification network. We conducted experiments on the verification task of the Voxceleb1 dataset. The results show that integrating this simple step into the end-to-end training pipeline significantly boosts the performance of speaker verification. In the testing stage of our L2-normalized end-to-end system, a simple inner product suffices to achieve state-of-the-art performance.
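The two operations the abstract describes can be sketched compactly: an L2 normalization layer maps each embedding onto the unit hypersphere, and a scale layer multiplies the result by a fixed constant before the classification output. A minimal NumPy sketch is below; the scale value `s = 30.0` and the function names are illustrative assumptions, not values from the paper.

```python
import numpy as np

def l2_norm_scale(x, s=30.0, eps=1e-12):
    """Length normalization followed by a scale layer.

    Each row of x is projected onto the unit hypersphere and then
    multiplied by a constant s (s is a hyperparameter; 30.0 is an
    illustrative choice, not a value reported in the paper).
    """
    norms = np.linalg.norm(x, axis=-1, keepdims=True)
    return s * x / np.maximum(norms, eps)  # eps guards against zero vectors

def inner_product_score(u, v):
    """Verification score between two L2-normalized embeddings.

    After length normalization, a plain inner product equals the
    cosine similarity, which is the scoring used at test time.
    """
    u = u / np.linalg.norm(u)
    v = v / np.linalg.norm(v)
    return float(np.dot(u, v))

# Example: two toy 2-D "embeddings"; after the layer, every row has norm s.
emb = np.array([[3.0, 4.0],
                [0.0, 5.0]])
scaled = l2_norm_scale(emb, s=30.0)
```

During training the scaled output feeds a standard softmax classification layer; at test time the classifier is discarded and `inner_product_score` compares enrollment and test embeddings directly.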


DOI: 10.21437/Interspeech.2018-92

Cite as: Cai, W., Chen, J., Li, M. (2018) Analysis of Length Normalization in End-to-End Speaker Verification System. Proc. Interspeech 2018, 3618-3622, DOI: 10.21437/Interspeech.2018-92.


@inproceedings{Cai2018,
  author={Weicheng Cai and Jinkun Chen and Ming Li},
  title={Analysis of Length Normalization in End-to-End Speaker Verification System},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={3618--3622},
  doi={10.21437/Interspeech.2018-92},
  url={http://dx.doi.org/10.21437/Interspeech.2018-92}
}