The DKU-SMIIP System for NIST 2018 Speaker Recognition Evaluation

Danwei Cai, Weicheng Cai, Ming Li


In this paper, we present the system submission to the NIST 2018 Speaker Recognition Evaluation by the DKU Speech and Multi-Modal Intelligent Information Processing (SMIIP) Lab. We explore various state-of-the-art front-end extractors as well as back-end modeling for text-independent speaker verification. Our submitted primary systems employ multiple state-of-the-art front-end extractors, including the MFCC i-vector, the DNN tandem i-vector, the TDNN x-vector, and the deep ResNet. After speaker embeddings are extracted, we exploit several kinds of back-end modeling to perform variability compensation and domain adaptation for mismatched training and testing conditions. The final submitted system on the fixed condition obtains actual detection costs of 0.392 and 0.494 on the CMN2 and VAST evaluation data, respectively. After the official evaluation, we further extend our experiments by investigating multiple encoding layer designs and loss functions for the deep ResNet system.


DOI: 10.21437/Interspeech.2019-1436

Cite as: Cai, D., Cai, W., Li, M. (2019) The DKU-SMIIP System for NIST 2018 Speaker Recognition Evaluation. Proc. Interspeech 2019, 4370-4374, DOI: 10.21437/Interspeech.2019-1436.


@inproceedings{Cai2019,
  author={Danwei Cai and Weicheng Cai and Ming Li},
  title={{The DKU-SMIIP System for NIST 2018 Speaker Recognition Evaluation}},
  year={2019},
  booktitle={Proc. Interspeech 2019},
  pages={4370--4374},
  doi={10.21437/Interspeech.2019-1436},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1436}
}