9th Annual Conference of the International Speech Communication Association

Brisbane, Australia
September 22-26, 2008

Continuous Phone Recognition Without Target Language Training Data

Dau-Cheng Lyu (1), Sabato Marco Siniscalchi (2), Tae-Yoon Kim (3), Chin-Hui Lee (3)

(1) Chang Gung University, Taiwan; (2) NTNU, Norway; (3) Georgia Institute of Technology, USA

Designing an automatic speech recognition system with little or no language-specific training data is a challenging research topic, because collecting abundant speech training data is not always feasible for every language of interest. Following our previously studied detection-based paradigm, we used a set of 21 acoustic-phonetic attributes shared by five languages to perform Japanese phone recognition without using any Japanese speech training data. In this paper, we address the key issue of designing attribute-to-phone mapping models with two techniques: (1) a phone-based background model for each speech attribute detector to improve attribute detection; and (2) a data-driven clustering algorithm that groups attribute-to-phone mapping rules of known languages to predict such rules for target phones in an unseen language. We report experimental results on continuous Japanese phone recognition with the OGI Multilingual Speech Corpus and show that the proposed approach indeed decreases the false rejection rate of attribute detection and improves phone recognition accuracy.
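To illustrate the second technique, the sketch below clusters attribute-to-phone mapping rules (represented here as vectors of attribute scores per phone) pooled from known languages, then predicts the rule for an unseen-language phone from its nearest cluster centroid. This is a minimal illustration, not the paper's implementation: the k-means routine, the vector representation, and all data are hypothetical stand-ins for the authors' data-driven clustering.

```python
# Hypothetical sketch: cluster attribute-to-phone mapping vectors from
# known languages, then map an unseen-language phone to the nearest
# cluster centroid. All names, dimensions, and data are illustrative.
import math
import random

def euclidean(a, b):
    """Euclidean distance between two attribute-score vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(vectors, k, iters=50, seed=0):
    """Plain k-means over mapping-rule vectors pooled across languages."""
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            idx = min(range(k), key=lambda i: euclidean(v, centroids[i]))
            clusters[idx].append(v)
        for i, members in enumerate(clusters):
            if members:  # keep the old centroid if a cluster empties out
                centroids[i] = [sum(dim) / len(members)
                                for dim in zip(*members)]
    return centroids

def predict_mapping(target_phone_attrs, centroids):
    """Predict a mapping rule for a target-language phone as the
    centroid closest to its universal attribute description."""
    return min(centroids, key=lambda c: euclidean(target_phone_attrs, c))
```

In practice the vectors would have one dimension per shared attribute (21 in the paper), and the predicted centroid would serve as the attribute-to-phone mapping for a Japanese phone never seen in training.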


Bibliographic reference.  Lyu, Dau-Cheng / Siniscalchi, Sabato Marco / Kim, Tae-Yoon / Lee, Chin-Hui (2008): "Continuous phone recognition without target language training data", In INTERSPEECH-2008, 2687-2690.