Interspeech'2005 - Eurospeech

Lisbon, Portugal
September 4-8, 2005

Acoustic and Phonetic Confusions in Accented Speech Recognition

Yi Liu, Pascale Fung

Hong Kong University of Science & Technology, China

Accented speech recognition is more challenging than standard speech recognition because of phonetic and acoustic confusions. Phonetic confusion in accented speech arises when an expected phone is pronounced as a different one, which leads to recognition errors. Acoustic confusion arises when the pronounced phone lies acoustically between two baseform models and can be recognized equally well as either. We argue that these two kinds of confusion must be analyzed and modeled separately in order to improve accented speech recognition without degrading standard speech recognition. We propose a likelihood ratio test to measure phonetic confusion and an asymmetric acoustic distance to measure acoustic confusion. Only accent-specific phonetic units with low acoustic confusion are added to an augmented pronunciation dictionary, while phonetic models with high acoustic confusion are reconstructed by decision tree merging. Experimental results show that our approach is effective and outperforms methods that model phonetic confusion or acoustic confusion alone, yielding a significant 5.7% absolute WER reduction on accented speech without degrading standard speech recognition.
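The abstract does not reproduce the paper's exact formulations, but the decision logic it describes can be sketched as follows. This is a minimal illustration only: it assumes diagonal-Gaussian phone models, uses a log-likelihood ratio as the phonetic-confusion score and an asymmetric KL divergence as a stand-in for the asymmetric acoustic distance, and the names route_variant, tau_phonetic, and tau_acoustic are hypothetical, not taken from the paper.

    # Illustrative sketch (not the authors' exact formulation): per accent-specific
    # pronunciation variant, decide whether to add it to the dictionary or to merge
    # acoustic models, based on a likelihood-ratio score and an asymmetric distance.
    import numpy as np

    def likelihood_ratio(frames, logpdf_baseform, logpdf_variant):
        """Log likelihood ratio of the variant phone model vs. the baseform model
        over observed frames (larger => stronger evidence of a phone change)."""
        return np.sum(logpdf_variant(frames) - logpdf_baseform(frames))

    def asymmetric_kl(mu_p, var_p, mu_q, var_q):
        """KL(p || q) between two diagonal Gaussians; asymmetric by construction,
        used here as a stand-in for the paper's asymmetric acoustic distance."""
        return 0.5 * np.sum(np.log(var_q / var_p)
                            + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

    def route_variant(frames, baseform, variant, tau_phonetic=0.0, tau_acoustic=1.0):
        """Return 'add_to_dictionary' for confusions that are phonetic but acoustically
        well separated, 'merge_models' when acoustic confusion is high, else 'keep'.
        tau_phonetic / tau_acoustic are hypothetical tuning thresholds."""
        lr = likelihood_ratio(frames, baseform["logpdf"], variant["logpdf"])
        if lr <= tau_phonetic:
            return "keep"               # no reliable evidence of a phone change
        d = asymmetric_kl(variant["mean"], variant["var"],
                          baseform["mean"], baseform["var"])
        if d >= tau_acoustic:
            return "add_to_dictionary"  # low acoustic confusion: new surface form
        return "merge_models"           # high acoustic confusion: rebuild by tree merging

The point of the two-stage test is the one made in the abstract: variants that are phonetically distinct and acoustically well separated are best handled in the pronunciation dictionary, whereas variants that sit between two baseform models are better handled by reconstructing the acoustic models themselves.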


Bibliographic reference.  Liu, Yi / Fung, Pascale (2005): "Acoustic and phonetic confusions in accented speech recognition", In INTERSPEECH-2005, 3033-3036.