Exploring the Use of Significant Words Language Modeling for Spoken Document Retrieval

Ying-Wen Chen, Kuan-Yu Chen, Hsin-Min Wang, Berlin Chen


Owing to the rapidly growing volumes of multimedia with associated speech available on the Internet, spoken document retrieval (SDR) has emerged as an important application. Apart from the considerable effort devoted to developing robust indexing and modeling techniques for spoken documents, a recent line of research aims to enrich and reformulate query representations so as to enhance retrieval effectiveness. In practice, pseudo-relevance feedback is by far the most prevalent paradigm for query reformulation; it assumes that the top-ranked feedback documents obtained from an initial round of retrieval are potentially relevant and can be exploited to reformulate the original query. Continuing this line of research, this paper presents a novel modeling framework that discovers significant words occurring in the feedback documents in order to infer an enhanced query language model for SDR. Formally, the proposed framework extracts the essential words that represent a common notion of relevance (i.e., the significant words occurring in almost all of the feedback documents) and deduces a new query language model that captures these significant words while modulating the influence of both highly frequent words and overly specific words. Experiments conducted on a benchmark SDR task demonstrate the performance merits of the proposed framework.
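To make the idea concrete, the following is a minimal, illustrative sketch of a significant-words-style query language model built from pseudo-relevance feedback documents. It is not the paper's actual estimation procedure; the scoring heuristic, the `background_freq` dictionary, and the `top_k` cutoff are all assumptions introduced here for illustration. It merely shows the three ingredients the abstract describes: rewarding words shared across almost all feedback documents, discounting globally frequent words, and dropping overly specific words.

```python
from collections import Counter

def significant_words_model(feedback_docs, background_freq, top_k=10):
    """Toy sketch (NOT the paper's estimation method): build a query
    language model from pseudo-relevance feedback documents by favoring
    words that occur in almost all of them, while discounting words that
    are highly frequent in the whole collection and discarding words that
    are too specific (appearing in only one feedback document)."""
    n_docs = len(feedback_docs)
    # Term frequency pooled over all feedback documents.
    tf = Counter(w for doc in feedback_docs for w in doc)
    # Document frequency: in how many feedback documents each word occurs.
    df = Counter(w for doc in feedback_docs for w in set(doc))

    scores = {}
    for w, freq in tf.items():
        if df[w] < 2:
            continue  # too specific: occurs in only one feedback document
        commonness = df[w] / n_docs          # ~1.0 if in almost all docs
        bg = background_freq.get(w, 1e-6)    # high for common words like "the"
        scores[w] = freq * commonness / bg   # penalize frequent background words

    # Keep the top-k words and normalize into a probability distribution.
    top = sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]
    total = sum(s for _, s in top) or 1.0
    return {w: s / total for w, s in top}

# Hypothetical feedback documents and background statistics for illustration.
docs = [["speech", "retrieval", "the"],
        ["speech", "model", "the"],
        ["speech", "retrieval", "the"]]
background = {"the": 0.5, "speech": 0.01, "retrieval": 0.01, "model": 0.01}
model = significant_words_model(docs, background)
```

In this toy run, "speech" (shared by all documents, rare in the background) dominates the model, "the" is suppressed despite appearing everywhere, and "model" is dropped as too specific.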


DOI: 10.21437/Interspeech.2017-612

Cite as: Chen, Y.-W., Chen, K.-Y., Wang, H.-M., Chen, B. (2017) Exploring the Use of Significant Words Language Modeling for Spoken Document Retrieval. Proc. Interspeech 2017, 2889-2893, DOI: 10.21437/Interspeech.2017-612.


@inproceedings{Chen2017,
  author={Ying-Wen Chen and Kuan-Yu Chen and Hsin-Min Wang and Berlin Chen},
  title={Exploring the Use of Significant Words Language Modeling for Spoken Document Retrieval},
  year={2017},
  booktitle={Proc. Interspeech 2017},
  pages={2889--2893},
  doi={10.21437/Interspeech.2017-612},
  url={http://dx.doi.org/10.21437/Interspeech.2017-612}
}