Neural i-vectors

Ville Vestman, Kong Aik Lee, Tomi Kinnunen


Deep speaker embeddings have been demonstrated to outperform their generative counterparts, i-vectors, in recent speaker verification evaluations. To combine the benefits of high performance and generative interpretation, we investigate the use of a deep embedding extractor and an i-vector extractor in succession. To bundle deep embedding extractors with i-vector extractors, we add aggregation layers inspired by the Gaussian mixture model (GMM) to the embedding extractor networks. The inclusion of a GMM-like layer allows the discriminatively trained network to serve as a provider of sufficient statistics for the i-vector extractor, yielding what we call neural i-vectors. We test our deep embeddings as well as the proposed neural i-vectors on the Speakers in the Wild (SITW) and the Speaker Recognition Evaluation (SRE) 2018 and 2019 datasets. On the core-core condition of SITW, our deep embeddings obtain performance comparable to the state-of-the-art. The neural i-vectors perform about 50% worse than the deep embeddings but, on the other hand, outperform the previous i-vector approaches reported in the literature by a clear margin.
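The GMM-like aggregation described above can be sketched as follows: each frame-level feature is softly assigned to a set of components, and the resulting responsibilities are accumulated into zeroth- and first-order sufficient statistics, the quantities an i-vector extractor consumes. This is a minimal NumPy sketch under the assumption of diagonal-covariance components with learned means, precisions, and weights; the function name and shapes are illustrative, not the paper's actual implementation.

```python
import numpy as np

def gmm_aggregation(frames, means, precisions, weights):
    """Soft-assign T frames to C diagonal-covariance components and
    accumulate zeroth-order (N) and first-order (F) statistics.

    frames:     (T, D) frame-level features
    means:      (C, D) component means
    precisions: (C, D) diagonal precisions (1 / variances)
    weights:    (C,)   component weights, summing to 1
    """
    # Per-frame, per-component log-likelihood (up to a shared constant)
    diff = frames[:, None, :] - means[None, :, :]           # (T, C, D)
    log_lik = -0.5 * np.sum(precisions * diff ** 2, axis=2) # (T, C)
    log_lik += 0.5 * np.sum(np.log(precisions), axis=1)     # log |precision| term

    # Responsibilities via a numerically stable softmax over components
    log_post = np.log(weights) + log_lik
    log_post -= log_post.max(axis=1, keepdims=True)
    post = np.exp(log_post)
    post /= post.sum(axis=1, keepdims=True)                 # (T, C), rows sum to 1

    N = post.sum(axis=0)   # zeroth-order statistics, shape (C,)
    F = post.T @ frames    # first-order statistics,  shape (C, D)
    return N, F
```

In a neural network, `means`, `precisions`, and `weights` would be trainable parameters of the aggregation layer; after discriminative training, the accumulated (N, F) statistics can be passed to a standard i-vector extractor in place of statistics from a generatively trained GMM-UBM.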


DOI: 10.21437/Odyssey.2020-10

Cite as: Vestman, V., Lee, K.A., Kinnunen, T. (2020) Neural i-vectors. Proc. Odyssey 2020 The Speaker and Language Recognition Workshop, 67-74, DOI: 10.21437/Odyssey.2020-10.


@inproceedings{Vestman2020,
  author={Ville Vestman and Kong Aik Lee and Tomi Kinnunen},
  title={{Neural i-vectors}},
  year=2020,
  booktitle={Proc. Odyssey 2020 The Speaker and Language Recognition Workshop},
  pages={67--74},
  doi={10.21437/Odyssey.2020-10},
  url={http://dx.doi.org/10.21437/Odyssey.2020-10}
}