Interspeech'2005 - Eurospeech
This paper presents a fully automated approach to the recognition of non-native speech, based on acoustic model modification. For a native language (L1) and a spoken language (L2), pronunciation variants of the L2 phones are automatically extracted from an existing non-native database in the form of a confusion matrix mapping each L2 phone to sequences of L1 phones. This is done using the ASR systems of both L1 and L2. Mapping to phone sequences rather than single phones handles the case where some L2 phones have no direct counterpart among the L1 phones. The confusion matrix is then used to modify the acoustic models (HMMs) of the L2 phones by integrating the corresponding L1 phone models as alternative HMM paths. In this way, no lexicon modification is required. The modified ASR system achieved a relative WER improvement of between 32% and 40% (L1 = French, L2 = English) on the French non-native database used for testing.
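The confusion-matrix extraction summarized above can be sketched as follows. This is a minimal illustration only: the alignment pairs, phone labels, and function name are hypothetical and not taken from the paper, which derives alignments from the L1 and L2 ASR systems.

```python
from collections import defaultdict

def build_confusion_matrix(alignments):
    """Estimate, for each L2 phone, the probability of each L1 phone
    sequence it is realized as in the non-native corpus.

    `alignments` is a list of (l2_phone, l1_phone_sequence) pairs
    obtained by aligning L2 and L1 recognition results (hypothetical
    toy input here)."""
    counts = defaultdict(lambda: defaultdict(int))
    for l2_phone, l1_seq in alignments:
        counts[l2_phone][tuple(l1_seq)] += 1
    # Normalize counts into per-phone probability distributions.
    matrix = {}
    for l2_phone, seq_counts in counts.items():
        total = sum(seq_counts.values())
        matrix[l2_phone] = {seq: c / total for seq, c in seq_counts.items()}
    return matrix

# Toy example: English "th" is often realized by French speakers as
# "s" or "z"; "h" may be deleted (empty L1 sequence).
alignments = [
    ("th", ["s"]), ("th", ["s"]), ("th", ["z"]),
    ("h", []), ("h", ["h"]),
]
matrix = build_confusion_matrix(alignments)
```

Each entry of the resulting matrix (e.g. the distribution over L1 sequences for the L2 phone "th") would then drive which L1 phone HMMs are added as alternative paths to that L2 phone's model.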
Bibliographic reference. Bouselmi, Ghazi / Fohr, Dominique / Illina, Irina / Haton, Jean-Paul (2005): "Fully automated non-native speech recognition using confusion-based acoustic model integration", In INTERSPEECH-2005, 1369-1372.