EUROSPEECH '97

This paper studies algorithms for reducing the computational effort of the mixture density calculations in HMM-based speech recognition systems. These likelihood calculations take about 70% of the total recognition time in the RWTH system for large vocabulary continuous speech recognition. To reduce the computational cost of the likelihood calculations, we investigate several space partitioning methods. A detailed comparison of these techniques is given on the North American Business Corpus (NAB'94) for a 20 000 word task. As a result, the so-called projection search algorithm in combination with the VQ method reduces the cost of likelihood computation by a factor of about 8 with no significant loss in the word recognition accuracy.
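To illustrate the kind of saving the abstract describes, the following is a minimal sketch (not the authors' implementation) of VQ-based preselection for Gaussian mixture likelihoods: the component means are partitioned into vector-quantization cells, and at decode time only the densities in the observation's nearest cell are evaluated instead of the full mixture. All names, sizes, and the unit-variance simplification are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

D, M, K = 4, 64, 8  # feature dim, mixture densities, VQ codewords (all illustrative)
means = rng.normal(size=(M, D))
log_weights = np.log(np.full(M, 1.0 / M))

# Unit-variance Gaussians for simplicity; real systems use diagonal covariances.
def log_gauss(x, mu):
    return -0.5 * (D * np.log(2 * np.pi) + np.sum((x - mu) ** 2))

def full_log_likelihood(x):
    # Exact mixture log-likelihood: evaluate every density.
    return np.logaddexp.reduce(
        [log_weights[m] + log_gauss(x, means[m]) for m in range(M)]
    )

# Offline: assign every mixture mean to its nearest VQ codeword (here the
# codewords are simply a random subset of the means).
codewords = means[rng.choice(M, size=K, replace=False)]
cell_of = np.argmin(
    ((means[:, None, :] - codewords[None, :, :]) ** 2).sum(-1), axis=1
)

def vq_log_likelihood(x):
    # Online: quantize the observation, then evaluate only the shortlist
    # of densities in the matching cell.
    cell = np.argmin(((x - codewords) ** 2).sum(-1))
    shortlist = np.flatnonzero(cell_of == cell)
    if shortlist.size == 0:
        shortlist = np.arange(M)  # fall back to the full evaluation
    return np.logaddexp.reduce(
        [log_weights[m] + log_gauss(x, means[m]) for m in shortlist]
    )

x = rng.normal(size=D)
print(full_log_likelihood(x), vq_log_likelihood(x))
```

Because the shortlist sums over a subset of the (positive) weighted density values, the approximate log-likelihood is a lower bound on the exact one; in practice the dominant components fall in the selected cell, so the error is small while only a fraction of the densities is computed.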
Bibliographic reference. Ortmanns, Stefan / Firzlaff, Thorsten / Ney, Hermann (1997): "Fast likelihood computation methods for continuous mixture densities in large vocabulary speech recognition", In EUROSPEECH-1997, 139-142.