ISCA Archive Interspeech 2021

Efficient Weight Factorization for Multilingual Speech Recognition

Ngoc-Quan Pham, Tuan-Nam Nguyen, Sebastian Stüker, Alex Waibel

End-to-end multilingual speech recognition trains a single model on a composite speech corpus covering many languages, so that one neural network transcribes all of them. Because each language in the training data has different characteristics, the shared network may struggle to optimize for all languages simultaneously. In this paper we propose a novel multilingual architecture that targets the core operation in neural networks: the linear transformation. The key idea is to assign fast weight matrices to each language by decomposing each weight matrix into a shared component and a language-dependent component; the latter is factorized into vectors under a rank-1 assumption to reduce the number of parameters per language. This efficient factorization scheme proves effective in two multilingual settings with 7 and 27 languages, reducing word error rates by 26% and 27% relative for two popular architectures, LSTM and Transformer, respectively.
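The decomposition the abstract describes can be sketched in a few lines. The snippet below is an illustrative NumPy sketch, not the paper's implementation: it assumes a purely additive rank-1 term `W_lang = W_shared + u vᵀ` (the paper may combine shared and language-dependent components differently, e.g. with multiplicative factors), and all names (`factors`, `linear`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out = 4, 3
languages = ["en", "de", "vi"]

# One shared full-rank weight matrix, trained jointly on all languages.
W_shared = rng.standard_normal((d_out, d_in))

# Per-language rank-1 factors: only d_out + d_in extra parameters per
# language, instead of a full d_out * d_in matrix per language.
factors = {lang: (rng.standard_normal(d_out),
                  rng.standard_normal(d_in)) for lang in languages}

def linear(x, lang):
    """Language-aware linear transform: y = (W_shared + u v^T) x."""
    u, v = factors[lang]
    W_lang = W_shared + np.outer(u, v)  # rank-1 language adaptation
    return W_lang @ x

x = rng.standard_normal(d_in)
y = linear(x, "en")  # shape (d_out,)
```

Under this assumption, adding a language costs `d_out + d_in` parameters rather than `d_out * d_in`, which is what makes the scheme cheap at 27 languages.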

doi: 10.21437/Interspeech.2021-216

Cite as: Pham, N.-Q., Nguyen, T.-N., Stüker, S., Waibel, A. (2021) Efficient Weight Factorization for Multilingual Speech Recognition. Proc. Interspeech 2021, 2421-2425, doi: 10.21437/Interspeech.2021-216

@inproceedings{pham21_interspeech,
  author={Ngoc-Quan Pham and Tuan-Nam Nguyen and Sebastian Stüker and Alex Waibel},
  title={{Efficient Weight Factorization for Multilingual Speech Recognition}},
  year=2021,
  booktitle={Proc. Interspeech 2021},
  pages={2421--2425},
  doi={10.21437/Interspeech.2021-216}
}