Sampling Strategies in Siamese Networks for Unsupervised Speech Representation Learning

Rachid Riad, Corentin Dancette, Julien Karadayi, Neil Zeghidour, Thomas Schatz, Emmanuel Dupoux


Recent studies have investigated siamese network architectures for learning invariant speech representations using same-different side information at the word level. Here, we systematically investigate an often-ignored component of siamese networks: the sampling procedure (i.e., how pairs of same vs. different tokens are selected). We show that sampling strategies taking into account Zipf's Law, the distribution of speakers, and the proportion of same vs. different word pairs significantly impact the performance of the network. In particular, we show that compressing word frequencies improves learning across a large range of variations in the number of training pairs, although the effect is weaker in the fully unsupervised setting, where pairs of same-different words are obtained by spoken term discovery. We apply these results to pairs of words discovered by an unsupervised algorithm and show an improvement over the state of the art in unsupervised representation learning using siamese networks.
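The frequency-compression idea from the abstract can be illustrated with a minimal sketch: instead of drawing word types in proportion to their raw (Zipfian) corpus frequency f, draw them in proportion to f**alpha for some exponent between 0 and 1. The function names and the `alpha` parameterization below are illustrative assumptions, not the paper's exact scheme.

```python
import random
from collections import Counter

def sampling_weights(tokens, alpha=0.5):
    # Compress word-type frequencies: weight each type by f**alpha.
    # alpha=1 reproduces raw Zipfian frequencies; alpha=0 samples
    # word types uniformly. (alpha is an illustrative knob, not
    # necessarily the exponent used in the paper.)
    counts = Counter(tokens)
    return {word: count ** alpha for word, count in counts.items()}

def sample_same_pair(tokens_by_word, weights, rng=random):
    # Draw a word type according to its compressed frequency, then
    # two distinct tokens of that type to form a "same" pair.
    words = [w for w in weights if len(tokens_by_word[w]) >= 2]
    word = rng.choices(words, weights=[weights[w] for w in words])[0]
    return tuple(rng.sample(tokens_by_word[word], 2))
```

With alpha < 1, rare word types are drawn relatively more often than their raw frequency would dictate, which counteracts the long tail of Zipf's Law when building same-pair training batches.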


DOI: 10.21437/Interspeech.2018-2384

Cite as: Riad, R., Dancette, C., Karadayi, J., Zeghidour, N., Schatz, T., Dupoux, E. (2018) Sampling Strategies in Siamese Networks for Unsupervised Speech Representation Learning. Proc. Interspeech 2018, 2658-2662, DOI: 10.21437/Interspeech.2018-2384.


@inproceedings{Riad2018,
  author={Rachid Riad and Corentin Dancette and Julien Karadayi and Neil Zeghidour and Thomas Schatz and Emmanuel Dupoux},
  title={Sampling Strategies in Siamese Networks for Unsupervised Speech Representation Learning},
  year={2018},
  booktitle={Proc. Interspeech 2018},
  pages={2658--2662},
  doi={10.21437/Interspeech.2018-2384},
  url={http://dx.doi.org/10.21437/Interspeech.2018-2384}
}