INTERSPEECH 2004 - ICSLP
Large vocabulary continuous speech recognition systems are known to be computationally intensive. A major bottleneck is the Gaussian mixture model (GMM) computation, and various techniques have been proposed to address this problem. We present a systematic study of fast GMM computation techniques. Because there are many such techniques and it is impractical to evaluate all of them exhaustively, we first categorize them into four layers and select representative ones to evaluate in each layer. Based on this framework, we provide a detailed analysis and comparison of GMM computation techniques from the four-layer perspective and explore two subtle practical issues: 1) how different techniques can be combined effectively, and 2) how beam pruning affects the performance of GMM computation techniques. All techniques are evaluated in the CMU Communicator domain. We also compare their performance with results reported in the literature.
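To illustrate why GMM evaluation dominates decoding cost, the following is a minimal sketch (not from the paper) of the standard per-frame log-likelihood of a diagonal-covariance GMM; the O(M·D) work per frame per HMM state, repeated over thousands of states, is what the surveyed fast-computation techniques aim to reduce. All names and sizes here are illustrative assumptions.

```python
import numpy as np

def gmm_log_likelihood(x, weights, means, variances):
    """Naive per-frame log-likelihood of a diagonal-covariance GMM.

    x: (D,) feature vector; weights: (M,); means, variances: (M, D).
    Cost is O(M * D) per frame per state, which is why large-vocabulary
    decoders spend much of their time here.
    """
    D = x.shape[0]
    # Per-component Gaussian log-densities (diagonal covariance).
    log_norm = -0.5 * (D * np.log(2 * np.pi) + np.sum(np.log(variances), axis=1))
    log_exp = -0.5 * np.sum((x - means) ** 2 / variances, axis=1)
    log_comp = np.log(weights) + log_norm + log_exp
    # Numerically stable log-sum-exp over the M mixture components.
    m = np.max(log_comp)
    return m + np.log(np.sum(np.exp(log_comp - m)))

# Hypothetical sizes: 8 components, 39-dimensional MFCC features.
rng = np.random.default_rng(0)
M, D = 8, 39
w = np.full(M, 1.0 / M)
mu = rng.standard_normal((M, D))
var = np.ones((M, D))
ll = gmm_log_likelihood(rng.standard_normal(D), w, mu, var)
```

Fast GMM techniques typically shortcut this computation, e.g. by pruning mixture components, sharing Gaussians across states, or skipping frames, rather than evaluating every component exactly as above.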
Bibliographic reference. Chan, Arthur / Mosur, Ravishankar / Rudnicky, Alexander / Sherwani, Jahanzeb (2004): "Four-layer categorization scheme of fast GMM computation techniques in large vocabulary continuous speech recognition systems", In INTERSPEECH-2004, 689-692.