Active and Semi-Supervised Learning in ASR: Benefits on the Acoustic and Language Models

Thomas Drugman, Janne Pylkkönen, Reinhard Kneser


The goal of this paper is to simulate the benefits of jointly applying active learning (AL) and semi-supervised training (SST) in a new speech recognition application. Our data selection approach relies on confidence filtering, and its impact on both the acoustic and language models (AM and LM) is studied. While AL is known to be beneficial to AM training, we show that it also yields substantial improvements to the LM when combined with SST. Sophisticated confidence models, on the other hand, did not yield any data selection gain. Our results indicate that, while SST is crucial at the beginning of the labeling process, its gains degrade rapidly once AL is in place. The final simulation shows that AL allows a transcription cost reduction of about 70% over random selection. Equivalently, for a fixed transcription budget, the proposed approach improves the word error rate by about 12.5% relative.
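The confidence-filtering strategy the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: utterances with low recognizer confidence are routed to human transcription (active learning), while high-confidence utterances keep their automatic transcripts for semi-supervised training. The function name and the threshold values are illustrative assumptions.

```python
def split_by_confidence(utterances, low_thr=0.5, high_thr=0.9):
    """Route utterances by ASR confidence.

    utterances: list of (utterance_id, hypothesis, confidence) tuples.
    Returns (ids to send for manual transcription, auto-labeled pairs).
    Thresholds are illustrative, not taken from the paper.
    """
    to_transcribe = []   # AL: low confidence -> human labeling
    auto_labeled = []    # SST: high confidence -> trust the ASR hypothesis
    for utt_id, hyp, conf in utterances:
        if conf < low_thr:
            to_transcribe.append(utt_id)
        elif conf > high_thr:
            auto_labeled.append((utt_id, hyp))
        # Mid-confidence utterances are discarded in this sketch.
    return to_transcribe, auto_labeled

batch = [("u1", "hello world", 0.3),
         ("u2", "good morning", 0.95),
         ("u3", "set an alarm", 0.7)]
al, sst = split_by_confidence(batch)
# al -> ["u1"]; sst -> [("u2", "good morning")]
```

Both selected pools can then feed training: the manually transcribed set improves the AM and LM directly, while the auto-labeled set adds SST data at no transcription cost.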


DOI: 10.21437/Interspeech.2016-1382

Cite as

Drugman, T., Pylkkönen, J., Kneser, R. (2016) Active and Semi-Supervised Learning in ASR: Benefits on the Acoustic and Language Models. Proc. Interspeech 2016, 2318-2322.

Bibtex
@inproceedings{Drugman+2016,
  author={Thomas Drugman and Janne Pylkkönen and Reinhard Kneser},
  title={Active and Semi-Supervised Learning in ASR: Benefits on the Acoustic and Language Models},
  year={2016},
  booktitle={Interspeech 2016},
  doi={10.21437/Interspeech.2016-1382},
  url={http://dx.doi.org/10.21437/Interspeech.2016-1382},
  pages={2318--2322}
}