ISCA Archive Interspeech 2015

Multi-task learning for text-dependent speaker verification

Nanxin Chen, Yanmin Qian, Kai Yu

Text-dependent speaker verification uses short utterances and verifies both speaker identity and text content. Because of this, traditional state-of-the-art speaker verification approaches, such as i-vector, may not work well. Recently there has been interest in applying deep learning to speaker verification; however, in previous work, standalone deep learning systems have not achieved state-of-the-art performance and have had to be used in system combination or as tandem features to obtain gains. In this paper, a novel multi-task deep learning framework is proposed for text-dependent speaker verification. First, multi-task deep learning is employed to learn both speaker identity and text information. With the learned network, the utterance-level average of the outputs of the last hidden layer, referred to as the j-vector (joint vector), is extracted. A discriminant function, with classes defined as the multi-task labels on both speaker and text, is then applied to the j-vectors as the decision function for closed-set recognition, and Probabilistic Linear Discriminant Analysis (PLDA), with classes defined on the multi-task labels, is applied to the j-vectors for verification. Experiments on the RSR2015 corpus showed that the j-vector approach yields good results on the evaluation data. The proposed multi-task deep learning system achieved 0.54% EER for verification and 0.14% EER for the closed-set condition.
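As a rough illustration of the j-vector extraction the abstract describes, the sketch below averages the last-hidden-layer outputs of a feedforward network over all frames of an utterance to get one fixed-length vector. All dimensions, the ReLU activation, and the random (untrained) weights are placeholder assumptions, not the paper's actual architecture or trained model:

```python
import numpy as np

# Hypothetical toy dimensions; the abstract does not specify the
# network's layer sizes or activation functions.
FEAT_DIM, HID_DIM = 40, 64

rng = np.random.default_rng(0)

# Stand-in for the body of the trained multi-task DNN: a single
# shared hidden layer with random weights.
W_h = rng.standard_normal((FEAT_DIM, HID_DIM)) * 0.1
b_h = np.zeros(HID_DIM)

def last_hidden(frames):
    """Frame-level activations of the last shared hidden layer."""
    return np.maximum(frames @ W_h + b_h, 0.0)  # ReLU (assumed)

def j_vector(frames):
    """Utterance-level average of last-hidden-layer outputs: the j-vector."""
    return last_hidden(frames).mean(axis=0)

# One synthetic utterance: 200 frames of 40-dim acoustic features.
utt = rng.standard_normal((200, FEAT_DIM))
jvec = j_vector(utt)
print(jvec.shape)  # (64,): one fixed-length vector per utterance
```

In the paper's pipeline, vectors extracted this way would then be scored with a discriminant function (closed-set recognition) or PLDA (verification), with classes defined on the joint speaker-and-text labels.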

doi: 10.21437/Interspeech.2015-81

Cite as: Chen, N., Qian, Y., Yu, K. (2015) Multi-task learning for text-dependent speaker verification. Proc. Interspeech 2015, 185-189, doi: 10.21437/Interspeech.2015-81

@inproceedings{chen15_interspeech,
  author={Nanxin Chen and Yanmin Qian and Kai Yu},
  title={{Multi-task learning for text-dependent speaker verification}},
  booktitle={Proc. Interspeech 2015},
  year={2015},
  pages={185--189},
  doi={10.21437/Interspeech.2015-81}
}