INTERSPEECH 2013
14th Annual Conference of the International Speech Communication Association

Lyon, France
August 25-29, 2013

Discriminatively Trained Dependency Language Modeling for Conversational Speech Recognition

Benjamin Lambert, Bhiksha Raj, Rita Singh

Carnegie Mellon University, USA

We present a discriminatively trained dependency parser-based language model. The model operates on whole utterances rather than individual words, and so can exploit long-distance structural features of each sentence. We train the model discriminatively on n-best lists, using the perceptron algorithm to tune the model weights. Our features include standard n-gram style features, long-distance co-occurrence features, and syntactic structural features. We evaluate the model by re-ranking n-best lists of recognized speech from the Fisher corpus of informal telephone conversations, comparing various combinations of feature types and methods of training the model.
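The perceptron-based n-best reranking the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the feature function, data, and the choice of the lowest-WER hypothesis as the "oracle" target are assumptions standing in for the paper's n-gram, co-occurrence, and dependency features.

```python
# Illustrative sketch (not the authors' implementation) of structured-
# perceptron reranking of ASR n-best lists. Feature names and the toy
# data below are hypothetical.
from collections import defaultdict

def perceptron_rerank_train(nbest_lists, features, epochs=5):
    """nbest_lists: list of (hypotheses, oracle_index) pairs, where
    hypotheses is a list of candidate transcripts and oracle_index
    points at the best (e.g. lowest-WER) hypothesis."""
    w = defaultdict(float)
    for _ in range(epochs):
        for hyps, oracle in nbest_lists:
            # Score every hypothesis under the current weights and
            # pick the one the model currently prefers.
            scores = [sum(w[f] * v for f, v in features(h).items())
                      for h in hyps]
            pred = max(range(len(hyps)), key=scores.__getitem__)
            if pred != oracle:
                # Standard perceptron update: promote the oracle's
                # features, demote the wrongly chosen hypothesis's.
                for f, v in features(hyps[oracle]).items():
                    w[f] += v
                for f, v in features(hyps[pred]).items():
                    w[f] -= v
    return w

# Toy bag-of-bigram features, a stand-in for the richer feature set
# (n-gram, long-distance co-occurrence, dependency structure).
def bigram_feats(sent):
    toks = sent.split()
    feats = defaultdict(float)
    for a, b in zip(toks, toks[1:]):
        feats[(a, b)] += 1.0
    return feats

# Hypothetical two-hypothesis n-best lists; index 1 is the oracle.
data = [(["the cat sad", "the cat sat"], 1),
        (["a dog rang", "a dog ran"], 1)]
w = perceptron_rerank_train(data, bigram_feats)
```

At test time, the learned weights simply rescore each n-best list and the top-scoring hypothesis is output; in the paper this reranking score would be combined with the recognizer's own acoustic and language model scores.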


Bibliographic reference: Lambert, Benjamin / Raj, Bhiksha / Singh, Rita (2013): "Discriminatively trained dependency language modeling for conversational speech recognition", in INTERSPEECH-2013, 3414-3418.