In a previous paper we proposed Web-based language models grounded in possibility theory. These models explicitly represent the possibility of word sequences. In this paper we investigate how best to combine this kind of model with classical probabilistic models in the context of automatic speech recognition. We propose several combination approaches, depending on the nature of the combined models. Compared with the baseline, the best combination yields an absolute word error rate reduction of about 1% on broadcast news transcription, and of 3.5% on domain-specific multimedia document transcription.
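As a purely illustrative sketch (the paper itself details several combination schemes, none of which is reproduced here), one simple way to merge a probabilistic LM score with a possibilistic one is log-linear interpolation; the function name, weight `lam`, and example values below are all hypothetical:

```python
import math

def combined_score(prob, possibility, lam=0.7):
    """Hypothetical log-linear combination of a probability and a possibility.

    prob: probability assigned to a word sequence by an n-gram model, in (0, 1]
    possibility: possibility degree for the same sequence, in (0, 1]
    lam: interpolation weight given to the probabilistic model
    """
    return lam * math.log(prob) + (1.0 - lam) * math.log(possibility)

# Rescoring two recognition hypotheses: a sequence judged fully possible by the
# Web-based model can outrank one with a higher n-gram probability.
h1 = combined_score(prob=1e-4, possibility=1.0)
h2 = combined_score(prob=2e-4, possibility=0.1)
best = "h1" if h1 > h2 else "h2"
```

Here `h1` wins despite its lower n-gram probability, which is the kind of complementary behavior a probabilistic/possibilistic combination aims to exploit.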
Bibliographic reference. Oger, Stanislas / Popescu, Vladimir / Linarès, Georges (2010): "Combination of probabilistic and possibilistic language models", In INTERSPEECH-2010, 1808-1811.