Cross-lingual transfer of speech technology is of growing importance as the demand for recognition systems in multiple languages increases. Previous work has shown two things: first, when the source and target languages differ substantially, cross-lingual adaptation may not outperform training from scratch, even when training data are limited; second, cross-lingual seed models achieve lower word error rates than flat-start or random models. Building on these findings, this paper improves the generation of cross-lingual seed models in two ways: (1) combining cross-lingual seed models with "flat-start" models, and (2) applying phoneme mappings at the HMM state level in the new language to reduce mismatched coarticulation. All experiments are carried out with limited training data, and the effectiveness of the approach is demonstrated using English-to-Mandarin transfer as a test case.
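The state-level mapping idea can be sketched as follows. This is a minimal, hypothetical illustration: the phoneme names, Gaussian parameters, and mapping table are invented for the example and are not taken from the paper; the point is only that a target-language phoneme HMM can be seeded state by state from different source-language phonemes, rather than inheriting one whole source HMM.

```python
# Hypothetical sketch of state-level cross-lingual HMM seeding.
# Each source (English) phoneme HMM is represented as a list of
# per-state Gaussian parameters (mean, variance); real systems use
# full mixture models over acoustic feature vectors.

SOURCE_HMMS = {  # English phoneme -> three emitting states
    "ih": [(0.1, 1.0), (0.2, 1.1), (0.3, 1.2)],
    "i":  [(0.4, 0.9), (0.5, 1.0), (0.6, 1.1)],
    "sh": [(1.0, 2.0), (1.1, 2.1), (1.2, 2.2)],
}

# State-level mapping: each target (Mandarin) phoneme state borrows
# one source state, and the entry/middle/exit states may come from
# different source phonemes (the mapping below is illustrative only).
STATE_MAP = {
    "x": [("sh", 0), ("sh", 1), ("ih", 2)],
    "i": [("ih", 0), ("i", 1), ("i", 2)],
}

def seed_target_hmm(target_phoneme):
    """Build a seed HMM for a target phoneme by copying mapped
    source-state parameters, one source state per target state."""
    return [SOURCE_HMMS[src][idx] for src, idx in STATE_MAP[target_phoneme]]

if __name__ == "__main__":
    print(seed_target_hmm("x"))
```

The seeded models would then be re-estimated on the limited target-language data; seeding at the state level rather than the whole-phoneme level is what lets the mapping compensate for coarticulation mismatches between the two languages.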
Bibliographic reference: Zhao, Xufang / O'Shaughnessy, Douglas (2008): "Seed models combination and state level mappings of cross-lingual transfer for rapid HMM development: from English to Mandarin", in Proceedings of INTERSPEECH 2008, 2699-2702.