Exploiting Semi-Supervised Training Through a Dropout Regularization in End-to-End Speech Recognition

Subhadeep Dey, Petr Motlicek, Trung Bui, Franck Dernoncourt


In this paper, we explore various approaches for semi-supervised learning in an end-to-end automatic speech recognition (ASR) framework. The first step in our approach involves training a seed model on the limited amount of labelled data. Additional unlabelled speech data is then passed through a data-selection mechanism to obtain the best hypothesized output, which is used to retrain the seed model. However, the uncertainties of the model may not be well captured with a single hypothesis. In contrast to this single-best technique, we apply a dropout mechanism to capture the uncertainty by obtaining multiple hypothesized text transcripts of a speech recording. We assume that the diversity of automatically generated transcripts for an utterance will implicitly increase the reliability of the model. Finally, the data-selection process is also applied to these hypothesized transcripts to reduce the uncertainty. Experiments on the freely available TED-LIUM corpus and a proprietary Adobe internal dataset show that the proposed approach significantly reduces ASR errors compared to the baseline model.
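The core idea of the abstract can be sketched in a few lines: keep dropout active at inference time, decode the same utterance several times to obtain diverse hypotheses, and use their agreement as a data-selection signal. The sketch below is a toy illustration only, assuming a hypothetical encoder output and a simple greedy decoder; it is not the authors' actual model, vocabulary, or selection criterion.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# Toy setup: 4-symbol vocabulary and a random output projection (hypothetical).
VOCAB = ["a", "b", "c", "<blank>"]
W = rng.normal(size=(8, len(VOCAB)))

def decode_with_dropout(features, p=0.3, rng=rng):
    """Greedy-decode one hypothesis with dropout kept active at inference.

    Each call samples a fresh inverted-dropout mask on the encoder features,
    so repeated calls yield diverse transcripts that reflect model uncertainty.
    """
    mask = (rng.random(features.shape) > p) / (1.0 - p)  # inverted dropout
    logits = (features * mask) @ W                       # per-frame scores
    ids = logits.argmax(axis=-1)                         # greedy decoding
    return "".join(VOCAB[i] for i in ids if VOCAB[i] != "<blank>")

# 5 frames of toy encoder features standing in for a real utterance.
features = rng.normal(size=(5, 8))

# Sample multiple hypotheses for the same utterance.
hyps = [decode_with_dropout(features) for _ in range(10)]

# Simple data selection (hypothetical criterion): keep the utterance for
# retraining only if the most frequent hypothesis dominates the samples.
best, count = Counter(hyps).most_common(1)[0]
agreement = count / len(hyps)
selected = agreement >= 0.6  # hypothetical threshold
```

In a real system the dropout masks would be applied inside the end-to-end network's layers rather than on a fixed feature matrix, and the selected (utterance, transcript) pairs would be merged with the labelled set to retrain the seed model.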


DOI: 10.21437/Interspeech.2019-3246

Cite as: Dey, S., Motlicek, P., Bui, T., Dernoncourt, F. (2019) Exploiting Semi-Supervised Training Through a Dropout Regularization in End-to-End Speech Recognition. Proc. Interspeech 2019, 734-738, DOI: 10.21437/Interspeech.2019-3246.


@inproceedings{Dey2019,
  author={Subhadeep Dey and Petr Motlicek and Trung Bui and Franck Dernoncourt},
  title={{Exploiting Semi-Supervised Training Through a Dropout Regularization in End-to-End Speech Recognition}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={734--738},
  doi={10.21437/Interspeech.2019-3246},
  url={http://dx.doi.org/10.21437/Interspeech.2019-3246}
}