Unspeech: Unsupervised Speech Context Embeddings

Benjamin Milde, Chris Biemann


We introduce "Unspeech" embeddings, which are based on unsupervised learning of context feature representations for spoken language. The embeddings were trained on up to 9500 hours of crawled English speech data without transcriptions or speaker information, using a straightforward learning objective based on context and non-context discrimination with negative sampling. We use a Siamese convolutional neural network architecture to train Unspeech embeddings and evaluate them on speaker comparison, on utterance clustering, and as a context feature in TDNN-HMM acoustic models trained on TED-LIUM, comparing them to i-vector baselines. In particular, decoding out-of-domain speech data from the recently released Common Voice corpus shows consistent WER reductions. We release our source code and pre-trained Unspeech models under a permissive open source license.
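The abstract's learning objective — scoring a segment against its true temporal context versus randomly drawn non-context segments — can be illustrated with a minimal NumPy sketch. Note this is an assumption-based illustration of a generic negative-sampling contrastive loss, not the paper's actual implementation; the function name `context_discrimination_loss` and all inputs are hypothetical, and the real model computes the embeddings with a Siamese convolutional network rather than receiving them as fixed vectors.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def context_discrimination_loss(target, context, negatives):
    """Binary logistic loss with negative sampling (hypothetical sketch):
    the target embedding should score high against its true context
    embedding and low against k sampled non-context embeddings."""
    pos_score = np.dot(target, context)          # similarity to true context
    neg_scores = negatives @ target              # similarities to negatives
    # One positive pair plus k negative pairs, summed as in word2vec-style
    # negative sampling.
    return -np.log(sigmoid(pos_score)) - np.sum(np.log(sigmoid(-neg_scores)))

# Toy example: a target segment, a nearby (context) segment, and
# unrelated segments drawn at random as negatives.
rng = np.random.default_rng(0)
dim = 32
target = rng.standard_normal(dim)
context = target + 0.1 * rng.standard_normal(dim)   # acoustically close
negatives = rng.standard_normal((4, dim))           # non-context samples
loss = context_discrimination_loss(target, context, negatives)
```

Minimizing this loss pulls embeddings of temporally adjacent speech together and pushes random segments apart, which is what lets the resulting representations capture speaker and channel characteristics without any labels.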


DOI: 10.21437/Interspeech.2018-2194

Cite as: Milde, B., Biemann, C. (2018) Unspeech: Unsupervised Speech Context Embeddings. Proc. Interspeech 2018, 2693-2697, DOI: 10.21437/Interspeech.2018-2194.


@inproceedings{Milde2018,
  author={Benjamin Milde and Chris Biemann},
  title={Unspeech: Unsupervised Speech Context Embeddings},
  year={2018},
  booktitle={Proc. Interspeech 2018},
  pages={2693--2697},
  doi={10.21437/Interspeech.2018-2194},
  url={http://dx.doi.org/10.21437/Interspeech.2018-2194}
}