INTERSPEECH 2006 - ICSLP
Ninth International Conference on Spoken Language Processing

Pittsburgh, PA, USA
September 17-21, 2006

Multilingual Non-Native Speech Recognition Using Phonetic Confusion-Based Acoustic Model Modification and Graphemic Constraints

G. Bouselmi, D. Fohr, I. Illina, Jean-Paul Haton

LORIA, France

In this paper we present an automated approach to non-native speech recognition. We introduce a new phonetic confusion concept that associates sequences of native language (NL) phones with spoken language (SL) phones. Phonetic confusion rules are automatically extracted from a non-native speech database for a given NL and SL using both the NL's and the SL's ASR systems. These rules are used to modify the acoustic models (HMMs) of the SL's ASR system by adding acoustic models of NL phones according to the rules. Since the pronunciation errors that non-native speakers produce depend on the spelling of the words, we also use graphemic constraints in the phonetic confusion extraction process. In the lexicon, the phones in each word's pronunciation are linked to the corresponding graphemes (characters) of the word. In this way, the phonetic confusion is established between (SL phone, grapheme) pairs and sequences of NL phones. We evaluated our approach on French, Italian, Spanish and Greek non-native speech databases, with English as the spoken language. The modified ASR system achieved significant relative improvements ranging from 20.3% to 43.2% in sentence error rate and from 26.6% to 50.0% in word error rate.
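To make the two steps described above concrete, the following is a minimal Python sketch of (a) counting how often each (SL phone, grapheme) pair is realised as a given NL phone sequence and keeping the frequent associations as confusion rules, and (b) using those rules to enumerate alternative phone sequences for a lexicon entry. All names and data layouts (aligned_utterances, min_prob, the toy phone labels) are illustrative assumptions, and the lexicon-level expansion shown here is only a simplified stand-in for the paper's actual method, which adds NL phone HMMs as alternative paths inside the SL acoustic models.

```python
from collections import defaultdict

def extract_confusion_rules(aligned_utterances, min_prob=0.1):
    """Keep (SL phone, grapheme) -> NL phone sequence associations seen often enough.

    `aligned_utterances` is assumed to be a list of alignments, each a list of
    ((sl_phone, grapheme), nl_phone_sequence) pairs obtained by time-aligning
    the SL forced alignment with the NL phone recogniser output.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for utterance in aligned_utterances:
        for (sl_phone, grapheme), nl_seq in utterance:
            counts[(sl_phone, grapheme)][tuple(nl_seq)] += 1

    rules = {}
    for key, nl_counts in counts.items():
        total = sum(nl_counts.values())
        # Keep only NL sequences whose relative frequency exceeds the threshold.
        rules[key] = {nl_seq: n / total
                      for nl_seq, n in nl_counts.items()
                      if n / total >= min_prob}
    return rules


def expand_pronunciation(pron_with_graphemes, rules):
    """List alternative pronunciations implied by the confusion rules.

    Each (SL phone, grapheme) position may be replaced by any of its associated
    NL phone sequences; the canonical SL phone is kept as an alternative too.
    """
    alternatives = [[]]
    for sl_phone, grapheme in pron_with_graphemes:
        options = [(sl_phone,)]                                   # canonical SL phone
        options += list(rules.get((sl_phone, grapheme), {}))      # NL substitutions
        alternatives = [prefix + list(opt)
                        for prefix in alternatives
                        for opt in options]
    return alternatives


if __name__ == "__main__":
    # Toy example: a speaker often realises English /ih/, written "i", as NL /i/.
    toy_alignments = [
        [(("ih", "i"), ["i"])],
        [(("ih", "i"), ["i"])],
        [(("ih", "i"), ["ih"])],
    ]
    rules = extract_confusion_rules(toy_alignments, min_prob=0.2)
    print(rules)
    print(expand_pronunciation([("ih", "i"), ("t", "t")], rules))
```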


Bibliographic reference.  Bouselmi, G. / Fohr, D. / Illina, I. / Haton, Jean-Paul (2006): "Multilingual non-native speech recognition using phonetic confusion-based acoustic model modification and graphemic constraints", In INTERSPEECH-2006, paper 1569-Mon1BuP.2.