Imperfect speech recognition often degrades the performance of text-based methods when they are applied to speech summarization. To alleviate this problem, this paper investigates ways to robustly represent the recognition hypotheses of spoken documents beyond the single top-scoring one. Moreover, a new summarization method is proposed to work with such robust representations; it is built on the Kullback-Leibler (KL) divergence measure and exploits both sentence and document relevance information. Experiments on broadcast news speech summarization demonstrate the utility of the presented approaches.
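To make the KL-divergence ranking idea concrete, the following is a minimal sketch (not the paper's exact formulation): sentences are scored by the KL divergence between a unigram language model of the whole document and a unigram model of each sentence, and sentences whose word distributions lie closest to the document's are selected first. The function names, the divergence direction KL(document || sentence), the epsilon smoothing, and the use of top-1 transcripts (rather than the paper's multiple-hypothesis representations) are all illustrative assumptions.

```python
import math
from collections import Counter

EPS = 1e-12  # smoothing constant to avoid zero probabilities (assumption)

def unigram_lm(tokens):
    """Maximum-likelihood unigram language model from a list of tokens."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def kl_divergence(p, q, vocab):
    """KL(p || q) over a shared vocabulary, with simple epsilon smoothing."""
    return sum(
        p.get(w, EPS) * math.log(p.get(w, EPS) / q.get(w, EPS))
        for w in vocab
    )

def rank_sentences_by_kl(sentences):
    """Rank sentences (lists of tokens) by KL divergence from the document model.

    Smaller divergence means the sentence's word distribution is closer to the
    document's, so it is treated as more representative for the summary.
    """
    doc_tokens = [w for sent in sentences for w in sent]
    doc_lm = unigram_lm(doc_tokens)
    vocab = set(doc_tokens)
    scored = []
    for idx, sent in enumerate(sentences):
        sent_lm = unigram_lm(sent)
        scored.append((kl_divergence(doc_lm, sent_lm, vocab), idx))
    return sorted(scored)

# Usage example: pick the most representative "sentence" of a toy document.
doc = [
    ["stocks", "fell", "sharply", "today"],
    ["the", "market", "reaction", "followed", "the", "rate", "decision"],
    ["stocks", "and", "the", "market", "fell", "after", "the", "rate", "decision"],
]
print(rank_sentences_by_kl(doc)[0])  # (score, index) of the best-ranked sentence
```

In the paper's setting, the sentence and document models would instead be estimated from multiple recognition hypotheses (e.g., expected word counts over N-best lists or lattices) rather than from a single 1-best transcript as in this sketch.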
Bibliographic reference: Lin, Shih-Hsiang / Chen, Berlin (2009): "Improved speech summarization with multiple-hypothesis representations and Kullback-Leibler divergence measures", in Proc. INTERSPEECH 2009, pp. 1847-1850.