In this paper we show that there is measurable information in the articulatory system which can help to disambiguate the acoustic signal. We directly measure the movement of the lips, tongue, jaw, velum and larynx, and parameterise this articulatory feature space using principal components analysis. The parameterisation is developed and evaluated on a speaker-dependent phone recognition task using a specially recorded TIMIT corpus of 460 sentences. The results show that the articulatory data contain useful supplementary information, yielding a small but significant improvement of 2% in phone recognition accuracy. However, preliminary attempts to estimate the articulatory data from the acoustic signal and use these estimates to supplement the acoustic input have not yielded any significant improvement in phone accuracy.
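The abstract does not detail how the PCA parameterisation is computed. The sketch below shows the standard approach, assuming the measured articulator trajectories are stacked into a frames-by-channels matrix: centre each channel, diagonalise the covariance matrix, and project each frame onto the leading eigenvectors. All variable names, the channel count and the number of retained components are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative sketch: PCA parameterisation of articulatory channels.
# `frames` stands in for a hypothetical (n_frames, n_channels) matrix of
# articulator coordinates (e.g. lip, tongue, jaw, velum and larynx
# positions per frame); the dimensions are assumptions, not from the paper.
rng = np.random.default_rng(0)
frames = rng.standard_normal((1000, 14))

# Centre each channel, then diagonalise the covariance matrix.
centred = frames - frames.mean(axis=0)
cov = np.cov(centred, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues

# Keep the leading components and project every frame onto them,
# giving a compact articulatory feature vector per time frame.
order = np.argsort(eigvals)[::-1]
n_components = 6                          # illustrative choice
basis = eigvecs[:, order[:n_components]]
features = centred @ basis                # (n_frames, n_components)

print(features.shape)                     # -> (1000, 6)
```

In a recognition setup of the kind the paper describes, feature vectors like `features` would be appended frame-by-frame to the acoustic observation vectors before training the phone recogniser.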
Cite as: Wrench, A.A., Richmond, K. (2000) Continuous speech recognition using articulatory data. Proc. 6th International Conference on Spoken Language Processing (ICSLP 2000), vol. 4, 145-148, doi: 10.21437/ICSLP.2000-772
@inproceedings{wrench00_icslp,
  author={Alan A. Wrench and Korin Richmond},
  title={{Continuous speech recognition using articulatory data}},
  year=2000,
  booktitle={Proc. 6th International Conference on Spoken Language Processing (ICSLP 2000)},
  pages={vol. 4, 145-148},
  doi={10.21437/ICSLP.2000-772}
}