We improve automatic speech recognition of broadcast news by using Web 2.0 paradigms to obtain time- and topic-relevant text data for language modeling. We present an unsupervised text collection and decoding strategy that comprises crawling appropriate texts from RSS feeds, complementing them with texts from Twitter, language model and vocabulary adaptation, and 2-pass decoding. The word error rates on the tested French broadcast news shows from Europe 1 are reduced by almost 32% relative with an underlying language model from the GlobalPhone project, and by almost 4% with an underlying language model from the Quaero project. The tools we use for text normalization, the collection of RSS feeds together with the texts on the related websites, TF-IDF-based topic word extraction, and language model interpolation are available in our Rapid Language Adaptation Toolkit.
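The TF-IDF-based topic word extraction mentioned above can be illustrated with a minimal sketch: each crawled document's words are scored by term frequency weighted against their document frequency in the collection, and the highest-scoring words serve as topic words. The function name and corpus below are illustrative, not taken from the Rapid Language Adaptation Toolkit.

```python
import math
from collections import Counter

def tfidf_topic_words(documents, top_n=5):
    """Rank each document's words by TF-IDF against the whole collection.

    Returns, per document, the top_n words most characteristic of it.
    """
    tokenized = [doc.lower().split() for doc in documents]
    n_docs = len(tokenized)

    # Document frequency: in how many documents does each word occur?
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))

    results = []
    for tokens in tokenized:
        tf = Counter(tokens)
        # TF-IDF = (relative term frequency) * log(N / document frequency)
        scores = {
            word: (count / len(tokens)) * math.log(n_docs / df[word])
            for word, count in tf.items()
        }
        results.append(sorted(scores, key=scores.get, reverse=True)[:top_n])
    return results

# Toy corpus: "apple" is frequent in and unique to the first document,
# so it is extracted as that document's topic word.
docs = ["apple apple banana", "banana cherry", "cherry date"]
print(tfidf_topic_words(docs, top_n=1))
```

Words occurring in every document receive a score of zero, so generic function words are naturally suppressed without a stop-word list.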
Bibliographic reference. Schlippe, Tim / Gren, Lukasz / Vu, Ngoc Thang / Schultz, Tanja (2013): "Unsupervised language model adaptation for automatic speech recognition of broadcast news using web 2.0", In INTERSPEECH-2013, 2698-2702.