ISCA Archive Interspeech 2021

On the Learning Dynamics of Semi-Supervised Training for ASR

Electra Wallington, Benji Kershenbaum, Ondřej Klejch, Peter Bell

The use of semi-supervised training (SST) has become an increasingly popular way of improving the performance of ASR acoustic models without the need for further transcribed speech data. However, the performance of the technique can be very sensitive to the quality of the initial ASR system. This paper undertakes a comprehensive study of the improvements gained with respect to variation in the initial systems, the quantity of untranscribed data used, and the learning schedules. We postulate that the reason SST can be effective even when the initial model is poor is that it enables utterance-level information to be propagated to the frame level, and hence hypothesise that the quality of the language model plays a much larger role than the quality of the acoustic model. In experiments on Tagalog data from the IARPA MATERIAL programme, we find that this is indeed the case, and show that with an appropriately chosen recipe it is possible to achieve over 50% relative WER reductions from SST, even when the WER of the initial system is more than 80%.

doi: 10.21437/Interspeech.2021-1777

Cite as: Wallington, E., Kershenbaum, B., Klejch, O., Bell, P. (2021) On the Learning Dynamics of Semi-Supervised Training for ASR. Proc. Interspeech 2021, 716-720, doi: 10.21437/Interspeech.2021-1777

@inproceedings{wallington21_interspeech,
  author={Electra Wallington and Benji Kershenbaum and Ondřej Klejch and Peter Bell},
  title={{On the Learning Dynamics of Semi-Supervised Training for ASR}},
  booktitle={Proc. Interspeech 2021},
  year={2021},
  pages={716--720},
  doi={10.21437/Interspeech.2021-1777}
}