Most spoken document retrieval systems use the words produced by a large-vocabulary speech recognizer as the internal representation for indexing documents. However, the use of recognition transcripts inherently limits performance, since the size of the recognizer's vocabulary restricts the number of queries for which matches can be found. In this paper we present a new approach to this problem based on combining Probabilistic Latent Semantic Analysis (PLSA) with phonetic indexing. PLSA maps the words in documents and queries into a semantic space in which they can be compared even if they don't share any common words. Combining this semantic distance with acoustic scores gives a relative improvement of 6-11% for out-of-vocabulary (OOV) queries and 4% for all queries on a 75-hour broadcast news indexing task.
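The abstract does not spell out the underlying models, so the LaTeX sketch below records the standard PLSA aspect model (a well-known fact about PLSA, not specific to this paper) together with one plausible fusion rule. The interpolation weight \lambda and the score symbols S_sem and S_ac are illustrative assumptions, not the paper's actual combination formula.

\[
P(w, d) \;=\; \sum_{z} P(z)\, P(w \mid z)\, P(d \mid z)
\]
% Standard PLSA aspect model: latent topics z link words w and documents d,
% letting a query and document match through shared topics rather than shared words.

\[
S(q, d) \;=\; \lambda\, S_{\mathrm{sem}}(q, d) \;+\; (1 - \lambda)\, S_{\mathrm{ac}}(q, d),
\qquad 0 \le \lambda \le 1
\]
% Hypothetical linear fusion of the semantic (PLSA) score and the acoustic
% (phonetic-index) score; lambda is a tuning weight assumed here for
% illustration, not a value given in the abstract.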
Cite as: Logan, B., Prasangsit, P., Moreno, P. (2003) Fusion of semantic and acoustic approaches for spoken document retrieval. Proc. ISCA Workshop on Multilingual Spoken Document Retrieval (MSDR 2003), 1-6.
@inproceedings{logan03_msdr,
  author    = {Beth Logan and Patrawadee Prasangsit and Pedro Moreno},
  title     = {{Fusion of semantic and acoustic approaches for spoken document retrieval}},
  year      = {2003},
  booktitle = {Proc. ISCA Workshop on Multilingual Spoken Document Retrieval (MSDR 2003)},
  pages     = {1--6}
}