An Improved Deep Embedding Learning Method for Short Duration Speaker Verification

Zhifu Gao, Yan Song, Ian McLoughlin, Wu Guo, Lirong Dai


This paper presents an improved deep embedding learning method based on a convolutional neural network (CNN) for short-duration speaker verification (SV). Existing deep learning-based SV methods generally extract frontend embeddings from a feed-forward deep neural network, in which long-term speaker characteristics are captured via a pooling operation over the input speech. The extracted embeddings are then scored by a backend model, such as Probabilistic Linear Discriminant Analysis (PLDA). Two improvements are proposed for frontend embedding learning based on the CNN structure: (1) motivated by WaveNet for speech synthesis, dilated filters are designed to achieve a tradeoff between computational efficiency and receptive field size; and (2) a novel cross-convolutional-layer pooling method is exploited to capture first-order statistics for modelling long-term speaker characteristics. Specifically, the activations of one convolutional layer are aggregated under the guidance of the feature maps from the successive layer. To evaluate the effectiveness of the proposed methods, extensive experiments are conducted on the modified female portion of the NIST SRE 2010 evaluations, with conditions ranging from 10s-10s to 5s-4s. Excellent performance is achieved on each evaluation condition, significantly outperforming existing SV systems using i-vector and d-vector embeddings.
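The appeal of dilated filters in point (1) is that stacking layers with growing dilation enlarges the receptive field exponentially while the per-layer cost stays that of an ordinary convolution. A minimal sketch of this arithmetic (the function name and the specific kernel/dilation choices are illustrative, not taken from the paper):

```python
def receptive_field(kernel_size, dilations):
    """Receptive field (in frames) of stacked 1-D dilated convolutions, stride 1.

    Each layer with dilation d widens the field by (kernel_size - 1) * d,
    so a WaveNet-style doubling schedule (1, 2, 4, ...) grows the field
    exponentially with depth at the same cost as undilated filters.
    """
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf
```

For example, four layers of kernel size 3 with dilations 1, 2, 4, 8 cover 1 + 2*(1+2+4+8) = 31 frames, versus only 9 frames for four undilated layers of the same size.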
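The cross-convolutional-layer pooling in point (2) can be read as using each feature map of layer L+1 as a set of frame weights for summing the activations of layer L, yielding guided first-order statistics. The sketch below is one plausible NumPy rendering under that reading; the function name, array shapes, and the assumption that both layers are aligned to the same T frames are ours, not the authors' implementation:

```python
import numpy as np

def cross_layer_pool(acts, guide):
    """Pool layer-L activations using layer-(L+1) feature maps as frame weights.

    acts:  (C, T) array, activations of one convolutional layer
    guide: (K, T) array, feature maps of the successive layer
           (assumed temporally aligned with acts)
    Returns a (K * C,) vector: for each of the K guide maps, a weighted
    sum over frames of the C activation channels (first-order statistics).
    """
    pooled = guide @ acts.T  # (K, C): guide map k weights the frames of acts
    return pooled.reshape(-1)
```

A guide map that is zero outside a few frames thus selects which part of the utterance contributes to the embedding, which is how the successive layer "guides" the aggregation.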


 DOI: 10.21437/Interspeech.2018-1515

Cite as: Gao, Z., Song, Y., McLoughlin, I., Guo, W., Dai, L. (2018) An Improved Deep Embedding Learning Method for Short Duration Speaker Verification. Proc. Interspeech 2018, 3578-3582, DOI: 10.21437/Interspeech.2018-1515.


@inproceedings{Gao2018,
  author={Zhifu Gao and Yan Song and Ian McLoughlin and Wu Guo and Lirong Dai},
  title={An Improved Deep Embedding Learning Method for Short Duration Speaker Verification},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={3578--3582},
  doi={10.21437/Interspeech.2018-1515},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1515}
}