INTERSPEECH 2014
15th Annual Conference of the International Speech Communication Association

Singapore
September 14-18, 2014

Combination of FST and CN Search in Spoken Term Detection

Justin Chiu (1), Yun Wang (1), Jan Trmal (2), Daniel Povey (2), Guoguo Chen (2), Alexander I. Rudnicky (1)

(1) Carnegie Mellon University, USA
(2) Johns Hopkins University, USA

Spoken Term Detection (STD) focuses on finding instances of a particular spoken word or phrase in an audio corpus. Most STD systems have a two-step pipeline: ASR followed by search. Two approaches to search are common: Confusion Network (CN) based search and Finite State Transducer (FST) based search. In this paper, we examine the combination of these two search approaches, using the same ASR output. We find that CN search performs better on shorter queries, while FST search performs better on longer queries. By combining the different search results from the same ASR decoding, we achieve better performance than either search approach on its own. We also find that this improvement is additive to the usual combination of decoder outputs obtained with different modeling techniques.
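
The abstract does not specify how the CN and FST hit lists are merged; as an illustration only, the Python sketch below shows a generic score-level combination of two detection lists, where overlapping hits (matched here by identical query, utterance, and rounded timestamps, a deliberately simplified criterion) have their scores averaged with fixed weights. All function and variable names are hypothetical and not taken from the paper.

    from collections import defaultdict

    def merge_detections(cn_hits, fst_hits, cn_weight=0.5, fst_weight=0.5):
        # Each hit is (query, utterance_id, start_time, end_time, score).
        # Hits from the two searches that share the same key have their
        # weighted scores summed; unique hits are kept with their own weight.
        # Real systems typically match hits by time overlap rather than
        # exact timestamps; this is a simplified sketch.
        merged = defaultdict(float)
        for query, utt, start, end, score in cn_hits:
            merged[(query, utt, round(start, 2), round(end, 2))] += cn_weight * score
        for query, utt, start, end, score in fst_hits:
            merged[(query, utt, round(start, 2), round(end, 2))] += fst_weight * score
        return [(q, u, s, e, sc) for (q, u, s, e), sc in merged.items()]

    if __name__ == "__main__":
        cn = [("hello world", "utt1", 1.20, 1.85, 0.9)]
        fst = [("hello world", "utt1", 1.20, 1.85, 0.7),
               ("hello world", "utt2", 4.10, 4.80, 0.6)]
        for hit in sorted(merge_detections(cn, fst), key=lambda h: -h[4]):
            print(hit)

A weighting scheme of this kind could also be made query-dependent (e.g., favoring CN scores for short queries and FST scores for long ones), in line with the observation reported in the abstract.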


Bibliographic reference.  Chiu, Justin / Wang, Yun / Trmal, Jan / Povey, Daniel / Chen, Guoguo / Rudnicky, Alexander I. (2014): "Combination of FST and CN search in spoken term detection", In INTERSPEECH-2014, 2784-2788.