Hybrid methods that combine hidden Markov models (HMMs) and connectionist techniques take advantage of what are believed to be the strong points of each of the two approaches: the powerful discrimination-based learning of connectionist networks and the time-alignment capability of HMMs. Connectionist Viterbi Training (CVT) is a simple variation of Viterbi training which uses a back-propagation network to represent the output distributions associated with the transitions in the HMM. The work reported here represents the culmination of three years of investigation of various means by which HMMs and neural networks (NNs) can be combined for continuous speech recognition. This paper describes the CVT procedure, discusses the factors most important to its design, and reports its recognition performance. Several changes made to the system over the past year are reported here, including: (1) the change from recurrent to non-recurrent NNs, (2) the change from SPHINX-style phone-based HMMs to word-based HMMs, (3) the addition of a corrective training procedure, and (4) the addition of an alternate model for every word. The CVT system, incorporating these changes, achieves 99.1% word accuracy and 98.0% string accuracy on the TI/NBS Connected Digits task ("TI Digits").
Keywords: hybrid systems, neural networks, back-propagation, TI Digits, Viterbi training, Connectionist Viterbi Training, CVT
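The core CVT idea described above (alternating Viterbi forced alignment with gradient training of a network that scores the HMM states) can be sketched roughly as follows. This is a minimal single-utterance illustration, not the authors' implementation: the left-to-right topology, the softmax "network," and all function names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def viterbi_align(log_scores):
    """Forced alignment through a left-to-right HMM: at each frame the
    path either stays in state s or advances to s+1.
    log_scores: (T, S) per-frame state log-scores from the network."""
    T, S = log_scores.shape
    dp = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    dp[0, 0] = log_scores[0, 0]          # must start in the first state
    for t in range(1, T):
        for s in range(S):
            stay = dp[t - 1, s]
            adv = dp[t - 1, s - 1] if s > 0 else -np.inf
            if adv > stay:
                dp[t, s] = adv + log_scores[t, s]; back[t, s] = s - 1
            else:
                dp[t, s] = stay + log_scores[t, s]; back[t, s] = s
    path = np.empty(T, dtype=int)        # backtrace from the final state
    path[-1] = S - 1
    for t in range(T - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path

def cvt_train(X, n_states, epochs=50, lr=0.5):
    """CVT-style loop: (1) align frames to states with Viterbi using the
    current network outputs as scores, (2) train the network on the
    frame-to-state labels produced by that alignment, and repeat."""
    T, D = X.shape
    W = 0.01 * rng.standard_normal((D, n_states))   # stand-in "network"
    for _ in range(epochs):
        probs = softmax(X @ W)
        labels = viterbi_align(np.log(probs + 1e-12))   # alignment step
        onehot = np.eye(n_states)[labels]
        W += lr * X.T @ (onehot - probs) / T            # cross-entropy step
    return W, viterbi_align(np.log(softmax(X @ W) + 1e-12))
```

In the full system the softmax classifier would be a multi-layer back-propagation network and the alignment would run over word-level models, but the alternation between alignment and discriminative training is the same.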
Bibliographic reference. Franzini, Michael A. / Waibel, Alex H. / Lee, Kai-Fu (1991): "Recent work in continuous speech recognition using the Connectionist Viterbi Training procedure", In EUROSPEECH-1991, 1213-1216.