Multilingual Data Selection for Low Resource Speech Recognition

Samuel Thomas, Kartik Audhkhasi, Jia Cui, Brian Kingsbury, Bhuvana Ramabhadran


Feature representations extracted from deep neural network-based multilingual frontends provide significant improvements to speech recognition systems in low resource settings. To effectively train these frontends, we introduce a data selection technique that discovers language groups from an available set of training languages. This data selection method reduces the required amount of training data and training time by approximately 40%, with minimal performance degradation. We present speech recognition results on 7 very limited language pack (VLLP) languages from the second option period of the IARPA Babel program using multilingual features trained on up to 10 languages. The proposed multilingual features provide up to 15% relative improvement over baseline acoustic features on the VLLP languages.


DOI: 10.21437/Interspeech.2016-598

Cite as

Thomas, S., Audhkhasi, K., Cui, J., Kingsbury, B., Ramabhadran, B. (2016) Multilingual Data Selection for Low Resource Speech Recognition. Proc. Interspeech 2016, 3853-3857.

BibTeX
@inproceedings{Thomas+2016,
  author={Samuel Thomas and Kartik Audhkhasi and Jia Cui and Brian Kingsbury and Bhuvana Ramabhadran},
  title={Multilingual Data Selection for Low Resource Speech Recognition},
  year={2016},
  booktitle={Interspeech 2016},
  doi={10.21437/Interspeech.2016-598},
  url={http://dx.doi.org/10.21437/Interspeech.2016-598},
  pages={3853--3857}
}