An End-to-End Text-Independent Speaker Verification Framework with a Keyword Adversarial Network

Sungrack Yun, Janghoon Cho, Jungyun Eum, Wonil Chang, Kyuwoong Hwang


This paper presents an end-to-end text-independent speaker verification framework that jointly considers a speaker embedding (SE) network and an automatic speech recognition (ASR) network. The SE network learns to output an embedding vector that captures the speaker characteristics of the input utterance, while the ASR network learns to recognize the phonetic context of the input. In training our speaker verification framework, we consider both triplet loss minimization and the adversarial gradient of the ASR network to obtain more discriminative and text-independent speaker embedding vectors. With the triplet loss, the distances between embedding vectors of the same speaker are minimized while those of different speakers are maximized. With the adversarial gradient of the ASR network, the text-dependency of the speaker embedding vector is reduced. In experiments on the LibriSpeech and CHiME 2013 datasets, our framework achieves a lower equal error rate and better text-independence than competing approaches.
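The two training signals described above can be sketched numerically. The snippet below is a minimal illustration, not the paper's implementation: the margin and adversarial weight `lam` are hypothetical values, and the gradient-reversal layer is summarized by its effect on the embedding network's objective (subtracting the scaled ASR loss).

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss on speaker embedding vectors.

    Pulls same-speaker embeddings (anchor, positive) together and
    pushes different-speaker embeddings (anchor, negative) apart.
    """
    d_ap = np.sum((anchor - positive) ** 2)  # same-speaker distance
    d_an = np.sum((anchor - negative) ** 2)  # different-speaker distance
    return max(0.0, d_ap - d_an + margin)

def embedding_objective(trip_loss, asr_loss, lam=0.1):
    """Objective seen by the SE network under adversarial training.

    Reversing the ASR gradient is equivalent, from the embedding
    network's perspective, to subtracting the scaled ASR loss:
    the SE network is driven to make phonetic content hard to
    recognize, reducing text-dependency of the embedding.
    """
    return trip_loss - lam * asr_loss

# Illustrative embeddings (2-D for readability)
a = np.array([1.0, 0.0])   # anchor utterance embedding
p = np.array([1.0, 0.1])   # same speaker, different utterance
n = np.array([0.0, 1.0])   # different speaker
loss = triplet_loss(a, p, n)        # well separated -> loss is 0.0
total = embedding_objective(loss, asr_loss=1.5)
```

Here the negative is already far from the anchor, so the hinge clamps the triplet term to zero and the combined objective is negative, rewarding embeddings that confuse the ASR branch.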


DOI: 10.21437/Interspeech.2019-2208

Cite as: Yun, S., Cho, J., Eum, J., Chang, W., Hwang, K. (2019) An End-to-End Text-Independent Speaker Verification Framework with a Keyword Adversarial Network. Proc. Interspeech 2019, 2923-2927, DOI: 10.21437/Interspeech.2019-2208.


@inproceedings{Yun2019,
  author={Sungrack Yun and Janghoon Cho and Jungyun Eum and Wonil Chang and Kyuwoong Hwang},
  title={{An End-to-End Text-Independent Speaker Verification Framework with a Keyword Adversarial Network}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={2923--2927},
  doi={10.21437/Interspeech.2019-2208},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2208}
}