ISCA Archive Interspeech 2013

Improving lightly supervised training for broadcast transcription

Y. Long, M. J. F. Gales, P. Lanchantin, X. Liu, M. S. Seigel, P. C. Woodland

This paper investigates improving lightly supervised acoustic model training for an archive of broadcast data. Standard lightly supervised training uses decoding hypotheses automatically derived with a biased language model. However, since the actual speech can deviate significantly from the original programme scripts that are supplied, the quality of standard lightly supervised hypotheses can be poor. To address this issue, word-level and segment-level combination approaches are applied between the lightly supervised transcripts and the original programme scripts, yielding improved transcriptions. Experimental results show that systems trained on these improved transcriptions consistently outperform those trained on the original lightly supervised decoding hypotheses alone. This holds for both maximum likelihood and minimum phone error trained systems.
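The abstract mentions word-level combination of the biased-LM decoding hypotheses with the original programme scripts, but does not spell out a procedure. The following is only an illustrative sketch of one generic way such a combination could work, not the authors' method: the hypothesis and script word sequences are aligned, and in regions of disagreement the decoder output is kept only if its (hypothetical) per-word confidence scores are high, otherwise the script words are used.

    # Illustrative sketch only: aligns a decoding hypothesis against the
    # original programme script and resolves disagreements by a hypothetical
    # per-word confidence threshold. Names and scores are assumptions, not
    # taken from the paper.
    from difflib import SequenceMatcher

    def combine_word_level(hyp_words, hyp_conf, script_words, conf_threshold=0.7):
        """Return a combined word sequence from hypothesis and script."""
        combined = []
        matcher = SequenceMatcher(a=hyp_words, b=script_words, autojunk=False)
        for tag, h_start, h_end, s_start, s_end in matcher.get_opcodes():
            if tag == "equal":
                # Hypothesis and script agree: keep the shared words.
                combined.extend(hyp_words[h_start:h_end])
            else:
                # Disagreement: trust the decoder only if it is confident.
                hyp_region = hyp_words[h_start:h_end]
                conf_region = hyp_conf[h_start:h_end]
                confident = bool(hyp_region) and min(conf_region, default=0.0) >= conf_threshold
                combined.extend(hyp_region if confident else script_words[s_start:s_end])
        return combined

    if __name__ == "__main__":
        hyp = "the prime minister said he would resign".split()
        conf = [0.9, 0.8, 0.9, 0.95, 0.4, 0.5, 0.6]
        script = "the prime minister said she will not resign".split()
        print(" ".join(combine_word_level(hyp, conf, script)))

In this toy run the low-confidence region "he would resign" is replaced by the script words, illustrating how script text can repair unreliable decoder output while confidently decoded words are retained.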


doi: 10.21437/Interspeech.2013-516

Cite as: Long, Y., Gales, M.J.F., Lanchantin, P., Liu, X., Seigel, M.S., Woodland, P.C. (2013) Improving lightly supervised training for broadcast transcription. Proc. Interspeech 2013, 2187-2191, doi: 10.21437/Interspeech.2013-516

@inproceedings{long13_interspeech,
  author={Y. Long and M. J. F. Gales and P. Lanchantin and X. Liu and M. S. Seigel and P. C. Woodland},
  title={{Improving lightly supervised training for broadcast transcription}},
  year=2013,
  booktitle={Proc. Interspeech 2013},
  pages={2187--2191},
  doi={10.21437/Interspeech.2013-516}
}