We present a discriminatively trained dependency parser-based language model. The model operates on whole utterances rather than individual words, and so can exploit long-distance structural features of each sentence. We train the model discriminatively on n-best lists, using the perceptron algorithm to tune the feature weights. Our features include standard n-gram style features, long-distance co-occurrence features, and syntactic structural features. We evaluate this model by re-ranking n-best lists of recognized speech from the Fisher corpus of informal telephone conversations, comparing various combinations of feature types and model training methods.
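The perceptron-based n-best re-ranking described above can be illustrated with a minimal sketch. This is not the paper's implementation; the feature names and the structured-perceptron update shown here are illustrative assumptions about how such a re-ranker is typically built.

```python
# Minimal sketch of perceptron re-ranking of n-best lists.
# Hypotheses are represented as sparse feature dicts (name -> value);
# the feature names below are hypothetical, not from the paper.

def perceptron_update(weights, gold_feats, predicted_feats, lr=1.0):
    """One structured-perceptron step: promote the gold (reference)
    hypothesis's features and demote the model's current top pick."""
    for f, v in gold_feats.items():
        weights[f] = weights.get(f, 0.0) + lr * v
    for f, v in predicted_feats.items():
        weights[f] = weights.get(f, 0.0) - lr * v
    return weights

def score(weights, feats):
    """Linear model score: dot product of weights and features."""
    return sum(weights.get(f, 0.0) * v for f, v in feats.items())

def rerank(weights, nbest):
    """Return the highest-scoring hypothesis from an n-best list."""
    return max(nbest, key=lambda feats: score(weights, feats))

# Toy example: two competing hypotheses differing in one dependency feature.
weights = {}
hyp1 = {"ngram:the cat": 1.0, "dep:nsubj(sat,cat)": 1.0}
hyp2 = {"ngram:the cat": 1.0, "dep:dobj(sat,cat)": 1.0}
# Suppose hyp2 matches the reference transcript; apply one update.
perceptron_update(weights, gold_feats=hyp2, predicted_feats=hyp1)
best = rerank(weights, [hyp1, hyp2])  # now prefers hyp2
```

After a single update, features unique to the gold hypothesis gain positive weight while the competitor's unique features are penalized, so subsequent re-ranking prefers hypotheses that share structure with the reference.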
Bibliographic reference. Lambert, Benjamin / Raj, Bhiksha / Singh, Rita (2013): "Discriminatively trained dependency language modeling for conversational speech recognition", In INTERSPEECH-2013, 3414-3418.