Automatic Speech Recognition and Topic Identification from Speech for Almost-Zero-Resource Languages

Matthew Wiesner, Chunxi Liu, Lucas Ondel, Craig Harman, Vimal Manohar, Jan Trmal, Zhongqiang Huang, Najim Dehak, Sanjeev Khudanpur


Automatic speech recognition (ASR) systems often need to be developed for extremely low-resource languages to serve end-uses such as audio content categorization and search. While universal phone recognition is natural to consider when no transcribed speech is available to train an ASR system in a language, adapting universal phone models using very small amounts (minutes rather than hours) of transcribed speech also needs to be studied, particularly with state-of-the-art DNN-based acoustic models. The DARPA LORELEI program provides a framework for such very-low-resource ASR studies and defines an extrinsic metric for evaluating ASR performance in a humanitarian assistance and disaster relief setting. This paper presents our Kaldi-based systems for the program, which employ a universal phone modeling approach to ASR, and describes recipes for very rapid adaptation of this universal ASR system. Our results significantly outperform those of many competing approaches on the NIST LoReHLT 2017 Evaluation datasets.
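The paper's downstream task, topic identification from (errorful) ASR output, can be illustrated with a generic text-classification baseline. The sketch below is not the authors' Kaldi-based system; it is a minimal, hypothetical example using TF-IDF features and a linear classifier from scikit-learn, with placeholder transcripts and topic labels invented for illustration.

```python
# Generic topic-ID-from-transcripts sketch (illustrative only; NOT the
# paper's pipeline). Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical ASR transcripts with humanitarian-domain topic labels.
train_texts = [
    "flood water rising in the village people need boats",
    "medical supplies urgently needed at the clinic",
    "road blocked by landslide vehicles cannot pass",
    "doctors report shortage of vaccines and bandages",
]
train_labels = ["water supply", "medical", "infrastructure", "medical"]

# TF-IDF bag-of-words over ASR hypotheses, then a linear classifier:
# a simple but standard baseline for topic ID on recognized speech.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(train_texts, train_labels)

# Classify a new (hypothetical) recognized utterance.
print(clf.predict(["the river flooded the main road"]))
```

In a very-low-resource setting such as the one the paper targets, the same classifier would simply consume whatever transcripts the adapted universal phone recognizer produces; robustness to recognition errors, rather than classifier choice, is typically the dominant concern.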


 DOI: 10.21437/Interspeech.2018-1836

Cite as: Wiesner, M., Liu, C., Ondel, L., Harman, C., Manohar, V., Trmal, J., Huang, Z., Dehak, N., Khudanpur, S. (2018) Automatic Speech Recognition and Topic Identification from Speech for Almost-Zero-Resource Languages. Proc. Interspeech 2018, 2052-2056, DOI: 10.21437/Interspeech.2018-1836.


@inproceedings{Wiesner2018,
  author={Matthew Wiesner and Chunxi Liu and Lucas Ondel and Craig Harman and Vimal Manohar and Jan Trmal and Zhongqiang Huang and Najim Dehak and Sanjeev Khudanpur},
  title={Automatic Speech Recognition and Topic Identification from Speech for Almost-Zero-Resource Languages},
  year={2018},
  booktitle={Proc. Interspeech 2018},
  pages={2052--2056},
  doi={10.21437/Interspeech.2018-1836},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1836}
}