Lattice-Based Lightly-Supervised Acoustic Model Training

Joachim Fainberg, Ondřej Klejch, Steve Renals, Peter Bell


In the broadcast domain there is an abundance of related text data and partial transcriptions, such as closed captions and subtitles. This text data can be used for lightly supervised training, in which text matching the audio is selected using an existing speech recognition model. Current approaches to light supervision typically filter the data based on matching error rates between the transcriptions and biased decoding hypotheses. In contrast, semi-supervised training does not require matching text data, instead generating a hypothesis using a background language model. State-of-the-art semi-supervised training uses lattice-based supervision with the lattice-free MMI (LF-MMI) objective function. We propose a technique to combine inaccurate transcriptions with the lattices generated for semi-supervised training, thus preserving uncertainty in the lattice where appropriate. We demonstrate that this combined approach reduces the expected error rates over the lattices, and reduces the word error rate (WER) on a broadcast task.
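As a toy sketch of the idea (not the paper's implementation, which operates on true FST lattices with the LF-MMI objective), the following simplifies a lattice to a list of (word sequence, probability) paths. It interpolates a closed-caption transcript into a decoded lattice and measures the expected word error rate over the paths; when the transcript is accurate, the combined lattice's expected WER drops while the decoder's alternative hypotheses, and thus its uncertainty, are preserved. All names and the interpolation weight are illustrative assumptions.

```python
# Toy illustration only: a "lattice" here is a list of (hypothesis, probability)
# paths, not a real FST lattice as used in the paper.

def edit_distance(ref, hyp):
    """Word-level Levenshtein distance between two token lists."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]

def expected_wer(lattice, ref):
    """Expected WER of a path-probability 'lattice' against a reference."""
    ref_words = ref.split()
    return sum(p * edit_distance(ref_words, hyp.split())
               for hyp, p in lattice) / len(ref_words)

def combine(transcript, lattice, weight=0.5):
    """Interpolate the transcript (as a one-path lattice) with the decoded
    lattice, keeping the decoder's alternatives where it is uncertain."""
    combined = [(hyp, (1 - weight) * p) for hyp, p in lattice]
    combined.append((transcript, weight))
    return combined

# Hypothetical example: the subtitle matches the true reference, while the
# biased decoder's paths contain errors.
decoded = [("the cat sat on a mat", 0.6), ("a cat sat on a mat", 0.4)]
transcript = "the cat sat on the mat"
reference = "the cat sat on the mat"

print(expected_wer(decoded, reference))
print(expected_wer(combine(transcript, decoded), reference))
```

In this contrived case the expected WER halves after combination, mirroring the paper's claim that adding the (partially correct) transcription to the semi-supervised lattice reduces the expected error rate over the lattice.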


DOI: 10.21437/Interspeech.2019-2533

Cite as: Fainberg, J., Klejch, O., Renals, S., Bell, P. (2019) Lattice-Based Lightly-Supervised Acoustic Model Training. Proc. Interspeech 2019, 1596-1600, DOI: 10.21437/Interspeech.2019-2533.


@inproceedings{Fainberg2019,
  author={Joachim Fainberg and Ondřej Klejch and Steve Renals and Peter Bell},
  title={{Lattice-Based Lightly-Supervised Acoustic Model Training}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={1596--1600},
  doi={10.21437/Interspeech.2019-2533},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2533}
}