2nd Workshop on Spoken Language Technologies for Under-Resourced Languages

Universiti Sains Malaysia, Penang, Malaysia
May 3-5, 2010

Pooling ASR Data for Closely Related Languages

Charl van Heerden, Neil Kleynhans, Etienne Barnard, Marelie Davel

CSIR, South Africa

We describe several experiments that were conducted to assess the viability of data pooling as a means to improve speech-recognition performance for under-resourced languages. Two groups of closely related languages from the Southern Bantu language family were studied, and our tests involved phoneme recognition on telephone speech using standard tied-triphone Hidden Markov Models. Approximately 6 to 11 hours of speech from around 170 speakers was available for training in each language. We find that useful improvements in recognition accuracy can be achieved when pooling data from languages that are highly similar, with two hours of data from a closely related language being approximately equivalent to one hour of data from the target language in the best case. However, the benefit decreases rapidly as languages become slightly more distant, and is also expected to decrease when larger corpora are available. Our results suggest that similarities in triphone frequencies are the most accurate predictor of the performance of language pooling in the conditions studied here.
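The abstract reports that similarity in triphone frequencies was the best predictor of pooling benefit. As an illustration only (the paper's exact metric is not given here), the sketch below compares the triphone frequency profiles of two languages using cosine similarity; the toy transcriptions and function names are hypothetical.

```python
# Illustrative sketch, not the paper's method: compare two languages'
# triphone frequency profiles via cosine similarity of triphone counts.
from collections import Counter
from math import sqrt

def triphone_counts(phone_strings):
    """Count overlapping triphones in a list of phoneme sequences."""
    counts = Counter()
    for phones in phone_strings:
        for i in range(len(phones) - 2):
            counts[tuple(phones[i:i + 3])] += 1
    return counts

def cosine_similarity(c1, c2):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(c1[t] * c2[t] for t in c1 if t in c2)
    norm1 = sqrt(sum(v * v for v in c1.values()))
    norm2 = sqrt(sum(v * v for v in c2.values()))
    return dot / (norm1 * norm2) if norm1 and norm2 else 0.0

# Toy phoneme transcriptions for two hypothetical languages
lang_a = [["b", "a", "n", "t", "u"], ["z", "u", "l", "u"]]
lang_b = [["b", "a", "n", "t", "u"], ["x", "h", "o", "s", "a"]]

sim = cosine_similarity(triphone_counts(lang_a), triphone_counts(lang_b))
print(round(sim, 3))
```

A higher score would suggest the candidate language is a better pooling partner, in the spirit of the abstract's finding; in practice the counts would come from full phone-level transcriptions of each corpus.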

Index Terms: speech recognition, under-resourced languages, data pooling

Full Paper

Bibliographic reference. Heerden, Charl van / Kleynhans, Neil / Barnard, Etienne / Davel, Marelie (2010): "Pooling ASR data for closely related languages", In SLTU-2010, 17-23.