Learning Speaker Representations with Mutual Information

Mirco Ravanelli, Yoshua Bengio


Learning good representations is of crucial importance in deep learning. Mutual Information (MI) and similar measures of statistical dependence are promising tools for learning these representations in an unsupervised way. Even though the mutual information between two random variables is hard to measure directly in high-dimensional spaces, some recent studies have shown that an implicit optimization of MI can be achieved with an encoder-discriminator architecture similar to that of Generative Adversarial Networks (GANs).

In this work, we learn representations that capture speaker identities by maximizing the mutual information between the encoded representations of chunks of speech randomly sampled from the same sentence. The proposed encoder relies on the SincNet architecture and transforms a raw speech waveform into a compact feature vector. The discriminator is fed either positive samples (drawn from the joint distribution of encoded chunks) or negative samples (drawn from the product of the marginals) and is trained to separate them.
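The sampling scheme above can be sketched in a few lines: positive pairs are two chunks encoded from the same sentence (joint distribution), negative pairs are chunks from different sentences (product of marginals), and a discriminator is trained with a binary cross-entropy objective to tell them apart. The sketch below is a minimal, hypothetical illustration only: the encoder is a stand-in (a fixed random linear projection rather than the actual SincNet), the discriminator is a single logistic unit, and all dimensions and hyperparameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

CHUNK_LEN = 200   # samples per speech chunk (illustrative)
EMB_DIM = 16      # encoder output dimensionality (illustrative)

# Hypothetical stand-in for the SincNet encoder: a fixed random linear
# projection mapping a raw-waveform chunk to a compact embedding.
W_enc = rng.standard_normal((CHUNK_LEN, EMB_DIM)) / np.sqrt(CHUNK_LEN)

def encode(chunk):
    return chunk @ W_enc

def random_chunk(sentence):
    # Randomly sample a fixed-length chunk from a sentence waveform.
    start = rng.integers(0, len(sentence) - CHUNK_LEN)
    return sentence[start:start + CHUNK_LEN]

# Toy "corpus": each sentence is a raw waveform (here just noise).
sentences = [rng.standard_normal(1600) for _ in range(8)]

def sample_pair(positive):
    """Return (concatenated embeddings, label) for one training pair."""
    i = rng.integers(len(sentences))
    if positive:
        j = i  # positive: both chunks from the same sentence (joint)
    else:
        # negative: a chunk from a different sentence (marginals)
        j = (i + 1 + rng.integers(len(sentences) - 1)) % len(sentences)
    z1 = encode(random_chunk(sentences[i]))
    z2 = encode(random_chunk(sentences[j]))
    return np.concatenate([z1, z2]), float(positive)

# Logistic discriminator over concatenated embedding pairs, trained with
# binary cross-entropy to separate positives from negatives; this
# implicitly maximizes a lower bound on the MI between the two chunks.
w = np.zeros(2 * EMB_DIM)

def bce_step(x, y, lr=0.1):
    global w
    p = 1.0 / (1.0 + np.exp(-(x @ w)))   # sigmoid discriminator score
    w -= lr * (p - y) * x                # SGD step on the BCE gradient
    return -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

for step in range(200):
    x, y = sample_pair(positive=bool(step % 2))
    loss = bce_step(x, y)
```

In the actual system both the encoder and the discriminator are trained jointly, so the gradient of the discriminator's objective also shapes the speaker embeddings; the frozen random encoder here is only meant to make the positive/negative sampling explicit.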

We report experiments showing that this approach effectively learns useful speaker representations, leading to promising results on speaker identification and verification tasks. Our experiments consider both unsupervised and semi-supervised settings and compare the performance achieved with different objective functions.


DOI: 10.21437/Interspeech.2019-2380

Cite as: Ravanelli, M., Bengio, Y. (2019) Learning Speaker Representations with Mutual Information. Proc. Interspeech 2019, 1153-1157, DOI: 10.21437/Interspeech.2019-2380.


@inproceedings{Ravanelli2019,
  author={Mirco Ravanelli and Yoshua Bengio},
  title={{Learning Speaker Representations with Mutual Information}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={1153--1157},
  doi={10.21437/Interspeech.2019-2380},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2380}
}