ISCA Archive Interspeech 2005

High-density discrete HMM with the use of scalar quantization indexing

Brian Mak, Siu-Kei Au Yeung, Yiu-Pong Lai, Manhung Siu

With the advances in semiconductor memory and the availability of very large speech corpora (hundreds to thousands of hours of speech), we would like to revisit the use of the discrete hidden Markov model (DHMM) in automatic speech recognition. To estimate the discrete density in a DHMM state, the acoustic space is divided into bins and one simply counts the relative number of observations falling into each bin. With a very large speech corpus, we believe that the number of bins can be greatly increased to obtain a much higher density than before; we call the new model the high-density discrete hidden Markov model (HDDHMM). Our HDDHMM differs from the traditional DHMM in two aspects: first, the codebook has a size in the thousands or even tens of thousands; second, we propose a method based on scalar quantization indexing so that, for a d-dimensional acoustic vector, the discrete codeword can be determined in O(d) time. During recognition, the state probability computation is reduced to an O(1) table look-up. The new HDDHMM was tested on WSJ0 with a 5K vocabulary. Compared with a baseline 4-stream continuous-density HMM system with a WER of 9.71%, a 4-stream HDDHMM system converted from it achieves a WER of 11.60%, with no distance or Gaussian computation.
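The idea of scalar quantization indexing described above can be sketched as follows. This is a minimal illustration, not the paper's actual codebook construction: the bin edges, the number of levels per dimension, and the mixed-radix packing of per-dimension indices are all illustrative assumptions. It shows how one scalar quantization per dimension yields the discrete codeword in O(d) time, after which the state emission probability is an O(1) table look-up.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4             # feature dimension (assumed for illustration)
bins_per_dim = 8  # scalar-quantizer levels per dimension (assumed)

# Per-dimension scalar quantizers: bin edges would be learned offline;
# here we use uniform edges on [-1, 1] purely for illustration.
edges = [np.linspace(-1.0, 1.0, bins_per_dim - 1) for _ in range(d)]

def codeword_index(x):
    """Map a d-dimensional vector to a discrete codeword in O(d):
    one scalar quantization per dimension, combined by mixed-radix packing."""
    idx = 0
    for j in range(d):
        b = int(np.searchsorted(edges[j], x[j]))  # bin index in dimension j
        idx = idx * bins_per_dim + b              # mixed-radix combine
    return idx

# State emission probabilities become a table look-up:
# one row per HMM state, one column per codeword (8^4 = 4096 here).
n_states = 3
table = rng.dirichlet(np.ones(bins_per_dim ** d), size=n_states)

x = rng.uniform(-1.0, 1.0, size=d)
k = codeword_index(x)          # O(d) quantization
p = table[1, k]                # O(1) emission probability for state 1
```

Note that the codebook size grows as the product of the per-dimension levels, which is why a very large corpus is needed to populate the bins with reliable counts.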

doi: 10.21437/Interspeech.2005-690

Cite as: Mak, B., Au Yeung, S.-K., Lai, Y.-P., Siu, M. (2005) High-density discrete HMM with the use of scalar quantization indexing. Proc. Interspeech 2005, 2121-2124, doi: 10.21437/Interspeech.2005-690

@inproceedings{mak05_interspeech,
  author={Brian Mak and Siu-Kei Au Yeung and Yiu-Pong Lai and Manhung Siu},
  title={{High-density discrete HMM with the use of scalar quantization indexing}},
  year=2005,
  booktitle={Proc. Interspeech 2005},
  pages={2121--2124},
  doi={10.21437/Interspeech.2005-690}
}