We want to enable users to locate desired information in spoken audio documents using not only the words, but also the dialog activities. Following previous research, we infer this information from prosodic features; however, instead of retrieving by matching against a predefined finite set of activities, we estimate similarity using a vector-space representation. Utterances close in this vector space are frequently similar not only pragmatically, but also topically. Exploiting this, we implemented a dialog-based query-by-example function and built it into an interface for use in combination with normal lexical search. In an experiment, searchers used the new feature and considered it helpful, but only for some search tasks.
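The abstract does not specify the similarity measure or the feature set; the following is a minimal sketch of dialog-based query-by-example, assuming cosine similarity over per-utterance prosodic feature vectors (all identifiers and values below are hypothetical illustrations, not the authors' implementation):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two feature vectors; 1.0 = identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def query_by_example(query_vec, utterances):
    # Rank corpus utterances by similarity to the query utterance's
    # prosodic feature vector (most similar first).
    return sorted(utterances,
                  key=lambda u: cosine_similarity(query_vec, u["features"]),
                  reverse=True)

# Hypothetical per-utterance prosodic summaries (e.g. pitch, energy, rate).
corpus = [
    {"id": "utt1", "features": [0.9, 0.1, 0.3]},
    {"id": "utt2", "features": [0.1, 0.8, 0.5]},
    {"id": "utt3", "features": [0.8, 0.2, 0.4]},
]
query = [0.85, 0.15, 0.35]  # features of the example utterance selected by the user
ranked = query_by_example(query, corpus)
```

In such a setup, the ranked list would be shown alongside ordinary lexical search results, letting the user retrieve utterances that "sound like" the selected example.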
Bibliographic reference. Ward, Nigel G. / Werner, Steven D. (2013): "Using dialog-activity similarity for spoken information retrieval", in Proceedings of Interspeech 2013, pp. 1569-1573.