INTERSPEECH 2015
16th Annual Conference of the International Speech Communication Association

Dresden, Germany
September 6-10, 2015

Adapting Machine Translation Models Toward Misrecognized Speech with Text-to-Speech Pronunciation Rules and Acoustic Confusability

Nicholas Ruiz (1), Qin Gao (2), William Lewis (2), Marcello Federico (1)

(1) FBK, Italy
(2) Microsoft, USA

In the spoken language translation pipeline, machine translation systems trained solely on written bitexts are often unable to recover from speech recognition errors, due to the mismatch between their written training data and ASR output. We propose a novel technique to simulate the errors generated by an ASR system, using the ASR system's pronunciation dictionary and language model. Lexical entries in the pronunciation dictionary are converted into phoneme sequences using a text-to-speech (TTS) analyzer and stored in a phoneme-to-word translation model. The translation model and ASR language model are combined into a phoneme-to-word MT system that "damages" clean texts to look like ASR outputs based on acoustic confusions. Training texts are TTS-converted and damaged into synthetic ASR data, which serves as adaptation data for training a speech translation system. Our proposed technique yields consistent improvements in translation quality on English-French lectures.
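The abstract describes a noisy-channel "damage" pipeline: clean words are expanded into phonemes, acoustic confusions are injected, and the phoneme string is decoded back into words under an ASR-style language model. The sketch below illustrates that flow under toy assumptions; the pronunciation dictionary PRON, confusion table CONFUSIONS, unigram LM, and the dynamic-programming decoder are all hypothetical stand-ins, not the paper's actual lexicon, confusability estimates, or phoneme-to-word MT system.

```python
import math
import random

# Toy pronunciation dictionary (word -> phoneme sequence); a real system
# would obtain these from a TTS front-end or the ASR lexicon.
PRON = {
    "recognize": ("R", "EH", "K", "AH", "G", "N", "AY", "Z"),
    "wreck":     ("R", "EH", "K"),
    "a":         ("AH",),
    "nice":      ("N", "AY", "S"),
    "beach":     ("B", "IY", "CH"),
    "speech":    ("S", "P", "IY", "CH"),
}

# Hypothetical acoustically confusable phoneme pairs with substitution probs.
CONFUSIONS = {"Z": [("S", 0.5)], "S": [("Z", 0.3)], "N": [("M", 0.1)]}

# Toy unigram language model probabilities (stand-in for the ASR LM).
LM = {"recognize": 0.30, "wreck": 0.10, "a": 0.50,
      "nice": 0.20, "beach": 0.15, "speech": 0.25}

def to_phonemes(words):
    # Step 1: TTS-style analysis -- expand clean words into phonemes.
    return [p for w in words for p in PRON[w]]

def corrupt(phonemes, rng):
    # Step 2: sample acoustic confusions over the phoneme string.
    out = []
    for p in phonemes:
        for alt, prob in CONFUSIONS.get(p, []):
            if rng.random() < prob:
                p = alt
                break
        out.append(p)
    return out

def decode(phonemes):
    # Step 3: phoneme-to-word decoding. A dynamic program over
    # segmentations; each word costs its LM log-prob, and any unmatched
    # phoneme falls back to a heavily penalized <unk> token.
    n = len(phonemes)
    best = [None] * (n + 1)   # best[i] = (score, words) over first i phonemes
    best[0] = (0.0, [])
    for i in range(n):
        if best[i] is None:
            continue
        candidates = list(PRON.items())
        candidates.append(("<unk>", tuple(phonemes[i:i + 1])))
        for word, pron in candidates:
            j = i + len(pron)
            if j <= n and tuple(phonemes[i:j]) == pron:
                score = best[i][0] + math.log(LM.get(word, 1e-4))
                if best[j] is None or score > best[j][0]:
                    best[j] = (score, best[i][1] + [word])
    return best[n][1]

rng = random.Random(4)  # fixed seed so the example is reproducible
clean = ["recognize", "speech"]
noisy = decode(corrupt(to_phonemes(clean), rng))
print(" ".join(clean), "->", " ".join(noisy))
# Depending on the sampled confusions, the clean text can degrade to
# something like "wreck a <unk> nice speech", mimicking a plausible ASR error.
```

Composing the corruption model with an LM in the decoder, rather than corrupting words directly, is what makes the synthetic errors acoustically and linguistically plausible rather than arbitrary noise.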

Bibliographic reference. Ruiz, Nicholas / Gao, Qin / Lewis, William / Federico, Marcello (2015): "Adapting machine translation models toward misrecognized speech with text-to-speech pronunciation rules and acoustic confusability", in Proceedings of INTERSPEECH 2015, pp. 2247-2251.