ISCA Archive Interspeech 2021

wav2vec-C: A Self-Supervised Model for Speech Representation Learning

Samik Sadhu, Di He, Che-Wei Huang, Sri Harish Mallidi, Minhua Wu, Ariya Rastrow, Andreas Stolcke, Jasha Droppo, Roland Maas

wav2vec-C introduces a novel representation learning technique combining elements from wav2vec 2.0 and VQ-VAE. Our model learns to reproduce quantized representations from partially masked speech encodings using a contrastive loss, in a way similar to wav2vec 2.0. However, the quantization process is regularized by an additional consistency network that learns to reconstruct the input features to the wav2vec 2.0 network from the quantized representations, in a way similar to a VQ-VAE model. The proposed self-supervised model is trained on 10k hours of unlabeled data and subsequently used as the speech encoder in an RNN-T ASR model, then fine-tuned with 1k hours of labeled data. This work is one of the very few studies of self-supervised learning on speech tasks with a large volume of real far-field labeled data. The wav2vec-C encoded representations achieve, on average, twice the error reduction over the baseline and higher codebook utilization than wav2vec 2.0.
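The objective described above combines two terms: a wav2vec 2.0-style contrastive loss on masked positions and a VQ-VAE-style consistency (reconstruction) loss that regularizes the quantizer. A minimal NumPy sketch of this combined loss is shown below; the function names, the temperature value, and the weighting factor `gamma` are illustrative assumptions, not the paper's exact hyperparameters.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def contrastive_loss(context, positive, negatives, temperature=0.1):
    # InfoNCE-style contrastive term as in wav2vec 2.0: the context vector
    # at a masked position should be close to its true quantized target
    # ("positive") and far from distractors drawn from other time steps.
    sims = [cosine_sim(context, positive)] + [cosine_sim(context, n) for n in negatives]
    logits = np.array(sims) / temperature
    log_probs = logits - np.log(np.sum(np.exp(logits)))
    return float(-log_probs[0])

def consistency_loss(input_features, reconstructed):
    # VQ-VAE-style consistency term: a decoder reconstructs the original
    # input features from the quantized codes; here plain mean-squared error.
    return float(np.mean((input_features - reconstructed) ** 2))

def wav2vec_c_loss(context, positive, negatives, x, x_hat, gamma=1.0):
    # Total wav2vec-C-style objective (sketch): contrastive term plus a
    # gamma-weighted consistency term. gamma is a hypothetical weight here.
    return contrastive_loss(context, positive, negatives) + gamma * consistency_loss(x, x_hat)
```

Intuitively, the consistency term keeps the codebook informative about the input (raising codebook utilization), because quantized codes that collapse to a few entries cannot reconstruct the input features well.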


doi: 10.21437/Interspeech.2021-717

Cite as: Sadhu, S., He, D., Huang, C.-W., Mallidi, S.H., Wu, M., Rastrow, A., Stolcke, A., Droppo, J., Maas, R. (2021) wav2vec-C: A Self-Supervised Model for Speech Representation Learning. Proc. Interspeech 2021, 711-715, doi: 10.21437/Interspeech.2021-717

@inproceedings{sadhu21_interspeech,
  author={Samik Sadhu and Di He and Che-Wei Huang and Sri Harish Mallidi and Minhua Wu and Ariya Rastrow and Andreas Stolcke and Jasha Droppo and Roland Maas},
  title={{wav2vec-C: A Self-Supervised Model for Speech Representation Learning}},
  year=2021,
  booktitle={Proc. Interspeech 2021},
  pages={711--715},
  doi={10.21437/Interspeech.2021-717}
}