Building an ASR System for Mboshi Using A Cross-Language Definition of Acoustic Units Approach

Odette Scharenborg, Patrick Ebel, Mark Hasegawa-Johnson, Najim Dehak


For many languages in the world, not enough (annotated) speech data is available to train an ASR system. Recently, we proposed a cross-language method for training an ASR system using linguistic knowledge and semi-supervised training. Here, we apply this approach to the low-resource language Mboshi. Using an ASR system trained on Dutch, Mboshi acoustic units were first created using cross-language initialization of the phoneme vectors in the output layer. Subsequently, this adapted system was retrained using Mboshi self-labels. Two training methods were investigated: retraining of only the output layer and retraining the full deep neural network (DNN). The resulting Mboshi system was analyzed by investigating per phoneme accuracies, phoneme confusions, and by visualizing the hidden layers of the DNNs prior to and following retraining with the self-labels. Results showed a fairly similar performance for the two training methods but a better phoneme representation for the fully retrained DNN.
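The two retraining regimes the abstract contrasts can be illustrated with a minimal sketch. This is not the authors' implementation: the network below is a toy two-layer DNN with random stand-in weights in place of a Dutch-pretrained model, and the feature/hidden/output sizes are hypothetical. It only shows the mechanical difference between updating the output layer alone and backpropagating through the full network when fitting (self-)labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 39-dim acoustic features, 256 hidden units,
# one output unit per Mboshi acoustic unit.
n_feat, n_hid, n_phones = 39, 256, 32

# Stand-in for a source-language (Dutch) pretrained network; in the paper
# the output layer is re-initialized cross-lingually for Mboshi units.
W_hid = rng.normal(0.0, 0.1, (n_hid, n_feat))
W_out = rng.normal(0.0, 0.1, (n_phones, n_hid))

def forward(x):
    h = np.maximum(0.0, W_hid @ x)           # ReLU hidden layer
    z = W_out @ h
    p = np.exp(z - z.max())
    p /= p.sum()                             # softmax over acoustic units
    return h, p

def train_step(x, y, lr=0.01, output_only=True):
    """One SGD step on cross-entropy loss; y is the (self-)label index.

    output_only=True  -> retrain only the output layer.
    output_only=False -> retrain the full DNN (hidden layer too).
    """
    global W_hid, W_out
    h, p = forward(x)
    dz = p.copy()
    dz[y] -= 1.0                             # dL/dz for softmax + CE
    if not output_only:
        dh = (W_out.T @ dz) * (h > 0)        # backprop through ReLU
        W_hid -= lr * np.outer(dh, x)
    W_out -= lr * np.outer(dz, h)
```

In this sketch, freezing the hidden layer (`output_only=True`) keeps the source-language representation fixed and only remaps it onto the new acoustic units, while full retraining also adapts the hidden representation, which is consistent with the paper's finding that the fully retrained DNN yields a better phoneme representation.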


DOI: 10.21437/SLTU.2018-35

Cite as: Scharenborg, O., Ebel, P., Hasegawa-Johnson, M., Dehak, N. (2018) Building an ASR System for Mboshi Using A Cross-Language Definition of Acoustic Units Approach. Proc. The 6th Intl. Workshop on Spoken Language Technologies for Under-Resourced Languages, 167-171, DOI: 10.21437/SLTU.2018-35.


@inproceedings{Scharenborg2018,
  author={Odette Scharenborg and Patrick Ebel and Mark Hasegawa-Johnson and Najim Dehak},
  title={{Building an ASR System for Mboshi Using A Cross-Language Definition of Acoustic Units Approach}},
  year=2018,
  booktitle={Proc. The 6th Intl. Workshop on Spoken Language Technologies for Under-Resourced Languages},
  pages={167--171},
  doi={10.21437/SLTU.2018-35},
  url={http://dx.doi.org/10.21437/SLTU.2018-35}
}