INTERSPEECH 2011
Spoken document retrieval (SDR) has emerged as an active area of research in the speech processing community. The fundamental problems facing SDR are generally three-fold: 1) a query is often only a vague expression of an underlying information need, 2) there is likely to be a word-usage mismatch between a query and a spoken document even when they are topically related, and 3) imperfect speech recognition transcripts carry erroneous information and thus deviate somewhat from the true theme of a spoken document. To mitigate these problems, in this paper we study a novel use of a relevance language modeling framework for SDR. It not only inherits the merits of several existing techniques but also provides a flexible yet systematic way to model the lexical and topical relationships between a query and a spoken document. Moreover, we investigate representing queries and documents with different granularities of index features to work in conjunction with the various relevance cues. Experiments conducted on the TDT SDR task demonstrate the promise of the methods derived from our retrieval framework when compared with several existing retrieval methods.
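The abstract does not spell out how the relevance model is estimated. As an illustration only, the following minimal Python sketch shows one common pseudo-relevance-feedback formulation in the spirit of Lavrenko and Croft's relevance model (RM1): a first-pass query-likelihood search selects top-ranked documents, a relevance model is estimated from them, and documents are re-ranked by cross-entropy. All function names, the Dirichlet smoothing parameter `mu`, and the `collection_lm` background model are assumptions for illustration, not the authors' exact formulation.

```python
# A minimal sketch of relevance language modeling for retrieval (assumed
# RM1-style pseudo-relevance feedback; not the paper's exact method).
from collections import Counter
import math

def doc_lm(tokens, collection_lm, mu=1000):
    """Dirichlet-smoothed unigram document language model P(w|D)."""
    counts, length = Counter(tokens), len(tokens)
    return lambda w: (counts[w] + mu * collection_lm(w)) / (length + mu)

def query_likelihood(query, lm):
    """log P(Q|D) under a unigram document model."""
    return sum(math.log(lm(w)) for w in query)

def relevance_model(query, docs, collection_lm, k=10):
    """RM1-style estimate: P(w|R) ~ sum over top-k docs of P(w|D) P(Q|D)."""
    models = {d: doc_lm(toks, collection_lm) for d, toks in docs.items()}
    top = sorted(docs, key=lambda d: query_likelihood(query, models[d]),
                 reverse=True)[:k]
    post = {d: math.exp(query_likelihood(query, models[d])) for d in top}
    z = sum(post.values()) or 1.0
    vocab = {w for d in top for w in docs[d]}
    return {w: sum(models[d](w) * post[d] / z for d in top) for w in vocab}

def rerank(rel_model, docs, collection_lm):
    """Re-rank documents by cross-entropy of the relevance model with P(w|D)
    (equivalent to ranking by negative KL divergence up to a constant)."""
    scores = {}
    for d, toks in docs.items():
        lm = doc_lm(toks, collection_lm)
        scores[d] = sum(p * math.log(lm(w)) for w, p in rel_model.items())
    return sorted(scores, key=scores.get, reverse=True)
```

In practice such a relevance model is often interpolated with the original query model before re-ranking (RM3-style), which is one plausible way the lexical relevance cues described above could be combined with the query; the paper should be consulted for the actual combination used.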
Bibliographic reference. Chen, Pei-Ning / Chen, Kuan-Yu / Chen, Berlin (2011): "Leveraging relevance cues for improved spoken document retrieval", In INTERSPEECH-2011, 929-932.