Large Margin Softmax Loss for Speaker Verification

Yi Liu, Liang He, Jia Liu


In neural-network-based speaker verification, speaker embeddings are expected to be discriminative across speakers while intra-speaker distances remain small. A variety of loss functions have been proposed to achieve this goal. In this paper, we investigate the large margin softmax loss with different configurations in speaker verification. Ring loss and the minimum hyperspherical energy criterion are introduced to further improve performance. Results on VoxCeleb show that our best system outperforms the baseline approach by 15% in EER, and by 13% and 33% in minDCF08 and minDCF10, respectively.
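The abstract does not specify the exact margin formulation or hyperparameters used in the paper, so the following is only a minimal sketch of one common large-margin softmax configuration, the additive-margin (AM-)softmax, plus a ring-loss style regularizer on embedding norms. The class and function names, the scale s, the margin m, and the ring-loss weight are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AMSoftmaxLoss(nn.Module):
    """Additive-margin softmax sketch (one possible large-margin configuration).

    Logits are cosine similarities between L2-normalized embeddings and
    L2-normalized class weights; a margin m is subtracted from the
    target-class cosine and a scale s is applied before cross-entropy.
    Values of s and m here are illustrative, not taken from the paper.
    """
    def __init__(self, embed_dim, num_speakers, s=30.0, m=0.2):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_speakers, embed_dim))
        nn.init.xavier_uniform_(self.weight)
        self.s = s
        self.m = m

    def forward(self, embeddings, labels):
        # Cosine similarity between normalized embeddings and class weights.
        cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        # Subtract the margin only from the target-class entries.
        one_hot = F.one_hot(labels, num_classes=cosine.size(1)).float()
        logits = self.s * (cosine - self.m * one_hot)
        return F.cross_entropy(logits, labels)

def ring_loss(embeddings, radius, weight=0.01):
    """Ring-loss style penalty: pull embedding norms toward a radius R.

    In practice `radius` would be a learnable scalar parameter; the weight
    here is an assumed value for illustration.
    """
    return weight * ((embeddings.norm(p=2, dim=1) - radius) ** 2).mean()

During training, the ring-loss term would typically be added to the margin-softmax objective, e.g. loss = am_softmax(emb, labels) + ring_loss(emb, R), so that embedding norms stay close to a common radius while the angular margin separates speakers.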


 DOI: 10.21437/Interspeech.2019-2357

Cite as: Liu, Y., He, L., Liu, J. (2019) Large Margin Softmax Loss for Speaker Verification. Proc. Interspeech 2019, 2873-2877, DOI: 10.21437/Interspeech.2019-2357.


@inproceedings{Liu2019,
  author={Yi Liu and Liang He and Jia Liu},
  title={{Large Margin Softmax Loss for Speaker Verification}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={2873--2877},
  doi={10.21437/Interspeech.2019-2357},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2357}
}