MTGAN: Speaker Verification through Multitasking Triplet Generative Adversarial Networks

Wenhao Ding, Liang He


In this paper, we propose an enhanced triplet method that improves the embedding encoding process by jointly exploiting a generative adversarial mechanism and multitask optimization. We extend our triplet encoder with Generative Adversarial Networks (GANs) and a softmax loss function: the GAN increases the generality and diversity of samples, while the softmax loss reinforces speaker-discriminative features. For brevity, we term our method Multitasking Triplet Generative Adversarial Networks (MTGAN). Experiments on short utterances demonstrate that MTGAN reduces the verification equal error rate (EER) by 67% and 32% (relative) over a conventional i-vector method and a state-of-the-art triplet loss method, respectively. This indicates that MTGAN outperforms triplet methods in expressing high-level speaker features.
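The abstract describes combining a triplet loss with a softmax (speaker-classification) loss in a multitask objective. The sketch below illustrates these two terms on toy embeddings; it is not the paper's implementation, and the margin value, embedding dimensions, and classifier logits are illustrative assumptions (the GAN term, which would be added on top, is omitted).

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Standard triplet loss on embeddings: pull same-speaker pairs together,
    # push different-speaker pairs apart by at least `margin` (value assumed).
    d_ap = np.sum((anchor - positive) ** 2, axis=-1)
    d_an = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_ap - d_an + margin)

def softmax_ce(logits, label):
    # Softmax cross-entropy for the auxiliary speaker-ID task,
    # computed in a numerically stable way.
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

# Toy embeddings: anchor/positive from the same speaker, negative from another.
a = np.array([1.0, 0.0])
p = np.array([0.9, 0.1])
n = np.array([0.0, 1.0])

# Multitask objective: triplet term plus softmax term (GAN term omitted here).
total = float(triplet_loss(a, p, n)) + float(softmax_ce(np.array([2.0, 0.5]), 0))
```

With these toy values the triplet term is zero (the negative is already far enough away), so the total is driven by the classification term; swapping the positive and negative embeddings yields a positive triplet penalty.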


DOI: 10.21437/Interspeech.2018-1023

Cite as: Ding, W., He, L. (2018) MTGAN: Speaker Verification through Multitasking Triplet Generative Adversarial Networks. Proc. Interspeech 2018, 3633-3637, DOI: 10.21437/Interspeech.2018-1023.


@inproceedings{Ding2018,
  author={Wenhao Ding and Liang He},
  title={MTGAN: Speaker Verification through Multitasking Triplet Generative Adversarial Networks},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={3633--3637},
  doi={10.21437/Interspeech.2018-1023},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1023}
}