INTERSPEECH 2013
Abstract. Semi-supervised discriminative language modeling uses simulated N-best lists instead of real ASR outputs as its training examples. In this study we apply two techniques in which artificial examples are generated using either a WFST or an MT system, each trained on pairs of reference text and ASR output. We compare the performance of these techniques with the structured prediction and ranking variants of the WER-sensitive perceptron algorithm, and contrast them with the supervised case where real ASR outputs are given as input. Using Turkish statistical morphs as n-gram features, we analyze the similarities between the hypotheses of these three setups and the number of utilized features. We show that the MT-based system yields the lowest WER, not only because the examples generated by this technique are more effective, but also because the ranking perceptron generalizes better with this setup. When trained on a combination of artificial WFST and MT data, the structured perceptron performs as well on an unseen test set as it does when trained on real ASR output.
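To make the training setup summarized in the abstract more concrete, the following Python sketch shows one epoch of a WER-sensitive structured perceptron over N-best lists of morph sequences (real or simulated). It is a minimal illustration under assumed conventions: the unigram/bigram feature template, the WER-gap scaling of the update, and all function names are illustrative assumptions rather than the authors' exact implementation, and the ranking variant mentioned in the abstract is not shown.

```python
from collections import Counter

def ngram_feats(tokens, n_max=2):
    """Count morph n-grams (here unigrams and bigrams) in one hypothesis."""
    feats = Counter(tokens)
    if n_max >= 2:
        feats.update(zip(tokens[:-1], tokens[1:]))
    return feats

def score(w, feats):
    """Linear model score of a hypothesis under weight vector w (a dict)."""
    return sum(w.get(f, 0.0) * c for f, c in feats.items())

def perceptron_epoch(w, nbest_lists, wer_sensitive=True):
    """One pass of the structured perceptron over N-best lists.

    nbest_lists: iterable of lists of (morph_tokens, wer) pairs,
    one list per utterance; the lists may come from real ASR output
    or from a WFST/MT confusion model applied to reference text.
    """
    for nbest in nbest_lists:
        feats = [ngram_feats(toks) for toks, _ in nbest]
        # Oracle = lowest-WER hypothesis; prediction = current model's top choice.
        oracle = min(range(len(nbest)), key=lambda i: nbest[i][1])
        pred = max(range(len(nbest)), key=lambda i: score(w, feats[i]))
        if pred != oracle:
            # WER-sensitive update: scale the step by the WER gap (assumption;
            # the plain structured perceptron uses step = 1.0).
            step = (nbest[pred][1] - nbest[oracle][1]) if wer_sensitive else 1.0
            for f, c in feats[oracle].items():
                w[f] = w.get(f, 0.0) + step * c
            for f, c in feats[pred].items():
                w[f] = w.get(f, 0.0) - step * c
    return w
```

In this sketch, supervised training corresponds to N-best lists produced by the recognizer, while the semi-supervised setups replace them with lists generated by the WFST- or MT-based confusion models; the perceptron update itself is unchanged.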
Bibliographic reference. Dikici, Erinç / Prud'hommeaux, Emily / Roark, Brian / Saraçlar, Murat (2013): "Investigation of MT-based ASR confusion models for semi-supervised discriminative language modeling", In INTERSPEECH-2013, 1218-1222.