Improved Multilingual Training of Stacked Neural Network Acoustic Models for Low Resource Languages

Tanel Alumäe, Stavros Tsakalidis, Richard Schwartz


This paper proposes several improvements to multilingual training of neural network acoustic models for speech recognition and keyword spotting in the context of low-resource languages. We concentrate on the stacked architecture, where the first network is used as a bottleneck feature extractor and the second network as the acoustic model. We propose to improve multilingual training when the amounts of data from different languages are very different by applying balancing scalers to the training examples. We also explore how to exploit multilingual data to train the second neural network of the stacked architecture. An ensemble training method that can take advantage of both unsupervised pretraining and multilingual training is found to give the best speech recognition performance across a wide variety of languages, while system combination of differently trained multilingual models yields further improvements in keyword search performance.
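The idea of balancing scalers can be illustrated with a minimal sketch: each training example is weighted inversely to the amount of data available for its language, so that low-resource languages are not swamped by high-resource ones. The inverse-frequency scheme and the normalization (weights averaging to 1) below are illustrative assumptions, not the paper's exact formula.

```python
# Hedged sketch of per-language balancing scalers for multilingual training.
# Assumption: weights are inversely proportional to each language's data
# amount, normalized so the average weight across languages is 1.

def balancing_weights(frames_per_language):
    """Return a per-language scaler for weighting training examples.

    frames_per_language: dict mapping language name -> number of
    training frames (or utterances) available for that language.
    """
    inv = {lang: 1.0 / n for lang, n in frames_per_language.items()}
    mean_inv = sum(inv.values()) / len(inv)
    # Normalize so that the mean weight over languages equals 1.
    return {lang: w / mean_inv for lang, w in inv.items()}

# Example: a high-resource and a low-resource language.
weights = balancing_weights({"english": 1_000_000, "zulu": 100_000})
```

In training, each example's gradient contribution would then be multiplied by the scaler of its language, a common way to realize such balancing in practice.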


DOI: 10.21437/Interspeech.2016-1426

Cite as:

Alumäe, T., Tsakalidis, S., Schwartz, R. (2016) Improved Multilingual Training of Stacked Neural Network Acoustic Models for Low Resource Languages. Proc. Interspeech 2016, 3883-3887.

BibTeX
@inproceedings{Alumäe+2016,
  author={Tanel Alumäe and Stavros Tsakalidis and Richard Schwartz},
  title={Improved Multilingual Training of Stacked Neural Network Acoustic Models for Low Resource Languages},
  year={2016},
  booktitle={Interspeech 2016},
  doi={10.21437/Interspeech.2016-1426},
  url={http://dx.doi.org/10.21437/Interspeech.2016-1426},
  pages={3883--3887}
}