Deep Neural Network Embeddings with Gating Mechanisms for Text-Independent Speaker Verification

Lanhua You, Wu Guo, Li-Rong Dai, Jun Du


In this paper, gating mechanisms are applied in deep neural network (DNN) training for x-vector-based text-independent speaker verification. First, a gated convolutional neural network (GCNN) is employed for modeling the frame-level embedding layers. Compared with the time-delay DNN (TDNN), the GCNN can obtain more expressive frame-level representations through carefully designed memory cells and gating mechanisms. Moreover, we propose a novel gated-attention statistics pooling strategy in which the attention scores are shared with the output gate. The gated-attention statistics pooling combines both gating and attention mechanisms into one framework; therefore, we can capture more useful information in the temporal pooling layer. Experiments are carried out on the NIST SRE16 and SRE18 evaluation datasets. The results demonstrate the effectiveness of the GCNN and show that the proposed gated-attention statistics pooling can further improve the performance.
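To make the two ideas in the abstract concrete, here is a minimal NumPy sketch. It is an illustration, not the paper's exact formulation: the GCNN block is shown as a GLU-style gate applied to a per-frame linear map (a real GCNN would use 1-D convolutions over time), and the gated-attention pooling assumes a single-head additive attention whose pre-softmax scores are reused, via a sigmoid, as an output gate on the frame features. All weight names and shapes here are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_block(X, W_lin, W_gate):
    """GLU-style gating: a linear path modulated by a sigmoid gate.
    X: (T, D) frame-level features. A 1-D convolution over time is
    omitted for brevity; a per-frame linear map stands in."""
    return (X @ W_lin) * sigmoid(X @ W_gate)

def gated_attention_stats_pooling(H, W_att, v_att):
    """Sketch of gated-attention statistics pooling: the frame-level
    attention scores are shared with a sigmoid output gate, then the
    weighted mean and standard deviation are concatenated."""
    scores = np.tanh(H @ W_att) @ v_att            # (T,) per-frame scores
    e = np.exp(scores - scores.max())
    alpha = e / e.sum()                            # attention weights over frames
    gate = sigmoid(scores)                         # output gate shares the scores
    G = gate[:, None] * H                          # gated frame features
    mu = (alpha[:, None] * G).sum(axis=0)          # attention-weighted mean, (D,)
    var = (alpha[:, None] * (G - mu) ** 2).sum(axis=0)
    sigma = np.sqrt(np.maximum(var, 1e-12))        # attention-weighted std, (D,)
    return np.concatenate([mu, sigma])             # utterance-level vector, (2D,)

# Usage: pool T=200 frames of D=64-dim features into a 128-dim statistic.
rng = np.random.default_rng(0)
T, D = 200, 64
H = gated_block(rng.standard_normal((T, D)),
                rng.standard_normal((D, D)) * 0.1,
                rng.standard_normal((D, D)) * 0.1)
pooled = gated_attention_stats_pooling(H,
                                       rng.standard_normal((D, D)) * 0.1,
                                       rng.standard_normal(D) * 0.1)
```

In the actual system, `pooled` would feed the segment-level layers from which the x-vector embedding is extracted.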


DOI: 10.21437/Interspeech.2019-1746

Cite as: You, L., Guo, W., Dai, L., Du, J. (2019) Deep Neural Network Embeddings with Gating Mechanisms for Text-Independent Speaker Verification. Proc. Interspeech 2019, 1168-1172, DOI: 10.21437/Interspeech.2019-1746.


@inproceedings{You2019,
  author={Lanhua You and Wu Guo and Li-Rong Dai and Jun Du},
  title={{Deep Neural Network Embeddings with Gating Mechanisms for Text-Independent Speaker Verification}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={1168--1172},
  doi={10.21437/Interspeech.2019-1746},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1746}
}