International Workshop on Spoken Language Translation (IWSLT) 2012

Hong Kong
December 6-7, 2012

Focusing Language Models For Automatic Speech Recognition

Daniele Falavigna, Roberto Gretter

HLT research unit, FBK, Povo (TN), Italy

This paper describes a method for selecting text data from a corpus in order to train auxiliary Language Models (LMs) for an Automatic Speech Recognition (ASR) system. A novel similarity score function is proposed that scores each document in the corpus, so that the highest-scoring documents can be selected to train auxiliary LMs, which are then linearly interpolated with the baseline LM. The similarity score function relies on "similarity models" built from the automatic transcriptions produced by earlier stages of the ASR system, while the documents selected for training the auxiliary LMs are drawn from the same data used to train the baseline LM. In this way, the resulting interpolated LMs are "focused" towards the output of the recognizer itself.
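The interpolation step described above can be illustrated with a minimal sketch: the baseline and auxiliary LM probabilities are mixed with a fixed weight. The toy unigram models, vocabulary, and weight below are invented for illustration and are not from the paper.

```python
def interpolate(p_base, p_aux, lam):
    """Linear interpolation: lam * P_aux + (1 - lam) * P_base."""
    return lam * p_aux + (1.0 - lam) * p_base

# Toy unigram distributions over a tiny vocabulary (illustrative only).
base = {"hello": 0.5, "world": 0.3, "speech": 0.2}
aux  = {"hello": 0.2, "world": 0.2, "speech": 0.6}  # "focused" on speech terms

lam = 0.4  # interpolation weight; in practice tuned on held-out data
mixed = {w: interpolate(base[w], aux[w], lam) for w in base}

# The mixture is still a valid probability distribution.
assert abs(sum(mixed.values()) - 1.0) < 1e-9
```

In practice the interpolation weight would be estimated on a held-out set (e.g. by minimizing perplexity), rather than fixed by hand as here.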
    The approach improves the word error rate, measured on a spontaneous speech task, by about 3% relative. Notably, a similar improvement was obtained using an "in-domain" set of texts not contained in the sources used to train the baseline LM.
    In addition, we compared the proposed similarity score function with two alternatives, based on perplexity (PP) and on the TFxIDF (Term Frequency x Inverse Document Frequency) vector space model. The proposed approach achieves about the same performance as the TFxIDF-based model while requiring less computation and less memory.
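For reference, the TFxIDF baseline the paper compares against can be sketched as follows: corpus documents are turned into TFxIDF vectors, and each is scored by cosine similarity against a query vector built from the ASR transcription. The tokenized documents and query below are invented examples; the paper's actual weighting details may differ.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TFxIDF vectors (term -> weight dicts) for tokenized documents."""
    n = len(docs)
    df = Counter()                       # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    idf = {t: math.log(n / df[t]) for t in df}
    return [{t: c * idf[t] for t, c in Counter(doc).items()} for doc in docs], idf

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy corpus and a query built from a (hypothetical) ASR transcription.
docs = [["speech", "recognition", "model"], ["stock", "market", "price"]]
vecs, idf = tfidf_vectors(docs)
query = ["speech", "model", "training"]
qvec = {t: c * idf[t] for t, c in Counter(query).items() if t in idf}

# Score every corpus document against the query; higher = more similar.
scores = [cosine(qvec, v) for v in vecs]
```

Note that building and storing a full TFxIDF vector per corpus document is exactly the memory cost the paper's similarity models are claimed to reduce.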


Bibliographic reference.  Falavigna, Daniele / Gretter, Roberto (2012): "Focusing language models for automatic speech recognition", In IWSLT-2012, 171-178.