Recently, an extension to standard hidden Markov models for speech recognition called Hidden Model Sequence (HMS) modelling was introduced. In this approach the relationship between the phones used in a pronunciation dictionary and the HMMs used to model these in context is assumed to be stochastic. One important feature of the HMS framework is its ability to handle arbitrary model-to-phone sequence alignments. In this paper we exploit that capability by using two different methods to model sub-phone insertions and deletions. Experiments on the Resource Management (RM) corpus and a subset of the Switchboard corpus show that, relative to a standard HMM baseline, a reduction in word error rate (WER) of 24.3% relative can be obtained on RM and 2.4% absolute on Switchboard.
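To make the abstract's central idea concrete, the sketch below is a toy illustration of a stochastic phone-to-model-sequence mapping in the spirit of HMS modelling. It is not the authors' training or decoding procedure; all model names, probabilities, and the scoring function are hypothetical, chosen only to show how a phone can map to zero sub-phone models (a deletion) or to more than one (an insertion).

```python
# Illustrative sketch only: a toy stochastic phone-to-model mapping in the
# spirit of Hidden Model Sequence (HMS) modelling, NOT the paper's method.
# All probabilities and identifiers below are hypothetical.

import math
from itertools import product

# Hypothetical mapping P(model sequence | phone). A phone may map to one
# model, to two models (a sub-phone "insertion"), or to the empty sequence
# (a sub-phone "deletion").
MODEL_SEQ_GIVEN_PHONE = {
    "ae": {("m_ae",): 0.85, ("m_ae", "m_ae2"): 0.10, (): 0.05},
    "t":  {("m_t",): 0.70, ("m_t_cl", "m_t_rel"): 0.20, (): 0.10},
}

def log_prob_model_sequence(phones, model_seq):
    """Return log P(model_seq | phones) under the toy mapping above,
    summing over all per-phone segmentations whose concatenation
    equals model_seq."""
    total = 0.0
    choices = [MODEL_SEQ_GIVEN_PHONE[p].items() for p in phones]
    for combo in product(*choices):
        seqs, probs = zip(*combo)
        flat = tuple(m for seq in seqs for m in seq)
        if flat == tuple(model_seq):
            total += math.prod(probs)
    return math.log(total) if total > 0 else float("-inf")

if __name__ == "__main__":
    phones = ["ae", "t"]
    # Canonical rendering: one model per phone.
    print(log_prob_model_sequence(phones, ["m_ae", "m_t"]))
    # Phone "t" realised by two sub-phone models (an insertion).
    print(log_prob_model_sequence(phones, ["m_ae", "m_t_cl", "m_t_rel"]))
    # Phone "ae" deleted at the model level.
    print(log_prob_model_sequence(phones, ["m_t"]))
```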
Cite as: Hain, T., Woodland, P.C. (2000) Modelling sub-phone insertions and deletions in continuous speech recognition. Proc. 6th International Conference on Spoken Language Processing (ICSLP 2000), vol. 4, 172-175, doi: 10.21437/ICSLP.2000-779
@inproceedings{hain00b_icslp,
  author={Thomas Hain and Philip C. Woodland},
  title={{Modelling sub-phone insertions and deletions in continuous speech recognition}},
  year=2000,
  booktitle={Proc. 6th International Conference on Spoken Language Processing (ICSLP 2000)},
  pages={vol. 4, 172-175},
  doi={10.21437/ICSLP.2000-779}
}