End-to-End Losses Based on Speaker Basis Vectors and All-Speaker Hard Negative Mining for Speaker Verification

Hee-Soo Heo, Jee-weon Jung, IL-Ho Yang, Sung-Hyun Yoon, Hye-jin Shim, Ha-Jin Yu


In recent years, speaker verification has primarily been performed using deep neural networks trained to output embeddings from input features such as spectrograms or Mel-filterbank energies. Studies that design various loss functions, including metric learning, have been widely explored. In this study, we propose two end-to-end loss functions for speaker verification based on the concept of speaker bases, which are trainable parameters. One loss function is designed to further increase the inter-speaker variation, and the other applies the same concept through hard negative mining. Each speaker basis is trained to represent the corresponding speaker during the training of the deep neural network. In contrast to conventional loss functions, which can consider only the limited number of speakers included in a mini-batch, the proposed loss functions consider all speakers in the training set regardless of the mini-batch composition. In particular, the proposed loss functions enable hard negative mining and the calculation of between-speaker variation over all speakers. Through experiments on the VoxCeleb1 and VoxCeleb2 datasets, we confirmed that the proposed loss functions can supplement the conventional softmax and center loss functions.
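The core idea of all-speaker hard negative mining can be sketched as follows: each utterance embedding is compared against a trainable basis vector for every speaker in the training set (not just the speakers present in the mini-batch), and the most similar wrong-speaker basis serves as the hard negative. This is an illustrative NumPy sketch under assumed design choices, not the paper's exact formulation; the cosine-similarity scoring, the hinge form, and the `margin` parameter are assumptions for demonstration.

```python
import numpy as np

def hard_negative_basis_loss(embeddings, labels, bases, margin=0.2):
    """Illustrative all-speaker hard-negative loss (not the paper's exact form).

    embeddings : (batch, dim) utterance embeddings from the network
    labels     : (batch,) integer speaker IDs
    bases      : (num_speakers, dim) trainable speaker basis vectors,
                 one row per speaker in the *entire* training set
    """
    # L2-normalise so dot products become cosine similarities
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    b = bases / np.linalg.norm(bases, axis=1, keepdims=True)

    # Similarity of every embedding to every speaker basis: (batch, num_speakers)
    sims = e @ b.T

    idx = np.arange(len(labels))
    pos = sims[idx, labels]           # similarity to the correct speaker's basis

    # Mask out the correct speaker, then take the hardest (most similar)
    # negative basis over ALL remaining speakers in the training set
    neg = sims.copy()
    neg[idx, labels] = -np.inf
    hard_neg = neg.max(axis=1)

    # Hinge: pull toward own basis, push away from the hardest negative
    return np.maximum(0.0, margin - pos + hard_neg).mean()
```

Because `bases` spans every training speaker, the negative search is independent of how the mini-batch was composed, which is the contrast with conventional in-batch metric-learning losses described above.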


DOI: 10.21437/Interspeech.2019-1986

Cite as: Heo, H., Jung, J., Yang, I., Yoon, S., Shim, H., Yu, H. (2019) End-to-End Losses Based on Speaker Basis Vectors and All-Speaker Hard Negative Mining for Speaker Verification. Proc. Interspeech 2019, 4035-4039, DOI: 10.21437/Interspeech.2019-1986.


@inproceedings{Heo2019,
  author={Hee-Soo Heo and Jee-weon Jung and IL-Ho Yang and Sung-Hyun Yoon and Hye-jin Shim and Ha-Jin Yu},
  title={{End-to-End Losses Based on Speaker Basis Vectors and All-Speaker Hard Negative Mining for Speaker Verification}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={4035--4039},
  doi={10.21437/Interspeech.2019-1986},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1986}
}