Joint Learning of Speaker and Phonetic Similarities with Siamese Networks

Neil Zeghidour, Gabriel Synnaeve, Nicolas Usunier, Emmanuel Dupoux

Recent work has demonstrated, on small datasets, the feasibility of jointly learning specialized speaker and phone embeddings in a weakly supervised siamese DNN architecture using word and speaker identity as side information. Here, we scale up these architectures to the 360 hours of the Librispeech corpus by implementing a sampling method to efficiently select pairs of words from the dataset and by improving the loss function. We also compare the standard siamese networks fed with same (AA) or different (AB) pairs to a ‘triamese’ network fed with AAB triplets. We use ABX discrimination tasks to evaluate the discriminability and invariance properties of the obtained joint embeddings, and compare these results with mono-embedding architectures. We find that the joint embedding architectures succeed in effectively disentangling speaker from phoneme information, with around 10% error for the matching tasks and embeddings (speaker task on speaker embeddings, and phone task on phone embeddings) and near-chance performance for the mismatched tasks. Furthermore, the results carry over to out-of-domain datasets, even beating the best results obtained with similar weakly supervised techniques.
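The pairwise training regime described in the abstract can be illustrated with a minimal sketch: a cosine-based siamese loss that pulls embeddings of same-word (AA) pairs together and pushes different-word (AB) pairs apart. The specific coscos²-style loss form and the function names here are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def cos_sim(x, y):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

def siamese_pair_loss(emb_a, emb_b, same):
    """Illustrative coscos2-style siamese loss (an assumption, not the
    paper's exact objective): same pairs are pulled toward cosine 1,
    different pairs are pushed toward cosine 0 (orthogonality)."""
    c = cos_sim(emb_a, emb_b)
    if same:
        return (1.0 - c) / 2.0  # 0 when embeddings are identical
    return c ** 2               # 0 when embeddings are orthogonal

# Toy check on 2-D "embeddings"
a = np.array([1.0, 0.0])
b = np.array([1.0, 0.0])
d = np.array([0.0, 1.0])
print(siamese_pair_loss(a, b, same=True))   # identical same pair: loss 0
print(siamese_pair_loss(a, d, same=False))  # orthogonal different pair: loss 0
```

In the joint-embedding setting, the same shared network would emit both a speaker branch and a phone branch, each trained with such pairwise terms using speaker identity and word identity, respectively, as the side information.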

DOI: 10.21437/Interspeech.2016-811

Cite as

Zeghidour, N., Synnaeve, G., Usunier, N., Dupoux, E. (2016) Joint Learning of Speaker and Phonetic Similarities with Siamese Networks. Proc. Interspeech 2016, 1295-1299.

@inproceedings{zeghidour16_interspeech,
  author={Neil Zeghidour and Gabriel Synnaeve and Nicolas Usunier and Emmanuel Dupoux},
  title={Joint Learning of Speaker and Phonetic Similarities with Siamese Networks},
  booktitle={Interspeech 2016},
  year={2016},
  pages={1295--1299},
  doi={10.21437/Interspeech.2016-811}
}