INTERSPEECH 2013
14th Annual Conference of the International Speech Communication Association

Lyon, France
August 25-29, 2013

Discriminative Training of WFST Factors with Application to Pronunciation Modeling

Preethi Jyothi (1), Eric Fosler-Lussier (1), Karen Livescu (2)

(1) Ohio State University, USA
(2) Toyota Technological Institute at Chicago, USA

One of the most popular speech recognition architectures consists of multiple components (such as the acoustic, pronunciation, and language models) that are modeled as weighted finite-state transducer (WFST) factors in a cascade. These factor WFSTs are typically trained in isolation and then combined efficiently for decoding. Recent work has explored jointly estimating the parameters of these models, which requires considerable amounts of training data. We propose an alternative approach that selectively trains individual factor WFSTs in such an architecture while still leveraging information from the entire cascade. This technique allows us to effectively estimate the parameters of a factor WFST from relatively small amounts of data, provided the factor itself is small. Our approach adapts an online training paradigm for linear models to discriminatively train one or more WFSTs in a cascade. We apply this method to train a pronunciation model for recognition of conversational speech, resulting in significant improvements in recognition performance over the baseline model.
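
To make the training idea concrete, below is a minimal sketch of the kind of perceptron-style online update for a single factor that the abstract alludes to. It assumes a toy setup in which each arc of the pronunciation factor carries a learnable weight, the cascade score is linear in those weights, and the other factors are held fixed; the function names, label pairs, and the absence of an actual decoder are illustrative assumptions, not the paper's implementation.

    # Assumed sketch (not the paper's code): a structured perceptron
    # update for the arc weights of one WFST factor, where features
    # are counts of that factor's arcs along a cascade path.
    from collections import defaultdict

    def arc_counts(path):
        # Each arc of the trainable factor is one feature; its value
        # is how often the arc fires on the given path.
        counts = defaultdict(int)
        for arc in path:
            counts[arc] += 1
        return counts

    def perceptron_update(weights, gold_path, best_path, lr=1.0):
        # Move weights toward arcs on the reference path and away from
        # arcs on the cascade's current best (incorrect) path.
        gold, best = arc_counts(gold_path), arc_counts(best_path)
        for arc in set(gold) | set(best):
            weights[arc] += lr * (gold[arc] - best[arc])

    # Toy usage: arcs are hypothetical (canonical phone, surface phone)
    # label pairs of a pronunciation factor; the acoustic and language
    # model factors of the cascade stay fixed during this update.
    weights = defaultdict(float)
    gold_path = [("t", "dx"), ("ax", "ax")]   # canonical 't' surfaces as a flap
    best_path = [("t", "t"), ("ax", "ax")]    # current decode keeps 't'
    perceptron_update(weights, gold_path, best_path)
    print(dict(weights))   # {('t', 'dx'): 1.0, ('t', 't'): -1.0}

In this view, training touches only the small pronunciation factor's weights, yet the update signal (the best path) comes from decoding through the full cascade, which is what lets a small factor be estimated from limited data.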


Bibliographic reference: Jyothi, Preethi / Fosler-Lussier, Eric / Livescu, Karen (2013): "Discriminative training of WFST factors with application to pronunciation modeling", in INTERSPEECH-2013, pp. 1961-1965.