Large-Scale Multilingual Speech Recognition with a Streaming End-to-End Model

Anjuli Kannan, Arindrima Datta, Tara N. Sainath, Eugene Weinstein, Bhuvana Ramabhadran, Yonghui Wu, Ankur Bapna, Zhifeng Chen, Seungji Lee


Multilingual end-to-end (E2E) models have shown great promise in expanding automatic speech recognition (ASR) coverage of the world's languages. They have shown improvement over monolingual systems, and have simplified training and serving by eliminating language-specific acoustic, pronunciation, and language models. This work presents an E2E multilingual system equipped to operate in low-latency interactive applications and to handle a key challenge of real-world data: the imbalance in training data across languages. Using nine Indic languages, we compare a variety of techniques, and find that a combination of conditioning on a language vector and training language-specific adapter layers produces the best model. The resulting E2E multilingual model achieves a lower word error rate (WER) than both monolingual E2E models (eight of nine languages) and monolingual conventional systems (all nine languages).
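The abstract's best-performing recipe combines two ideas: conditioning the shared model on a language vector and inserting small language-specific adapter layers. The sketch below is a minimal, hypothetical illustration of those two mechanisms in NumPy, not the paper's implementation; all names, dimensions, and the residual bottleneck structure of the adapter are assumptions for the example.

```python
import numpy as np

# Hypothetical sketch (not the paper's code): a residual adapter layer
# with per-language parameters, plus a one-hot language vector that
# could be appended to the input features to condition a shared model.

NUM_LANGS = 9    # nine Indic languages, per the abstract
FEAT_DIM = 16    # toy encoder activation size (assumption)
BOTTLENECK = 4   # adapter bottleneck size (assumption)

rng = np.random.default_rng(0)

# One small set of adapter weights per language; the rest of the
# model's parameters would be shared across all languages.
adapters = {
    lang: {
        "w_down": rng.normal(scale=0.1, size=(FEAT_DIM, BOTTLENECK)),
        "w_up": rng.normal(scale=0.1, size=(BOTTLENECK, FEAT_DIM)),
    }
    for lang in range(NUM_LANGS)
}

def language_one_hot(lang_id: int) -> np.ndarray:
    """One-hot language vector used to condition the shared model."""
    v = np.zeros(NUM_LANGS)
    v[lang_id] = 1.0
    return v

def adapter_forward(x: np.ndarray, lang_id: int) -> np.ndarray:
    """Residual adapter: down-project, ReLU, up-project, add input."""
    p = adapters[lang_id]
    h = np.maximum(x @ p["w_down"], 0.0)  # bottleneck + nonlinearity
    return x + h @ p["w_up"]              # residual connection

x = rng.normal(size=FEAT_DIM)
y = adapter_forward(x, lang_id=3)
```

Because each adapter holds only `FEAT_DIM * BOTTLENECK * 2` extra weights per language, this style of model can specialize to low-resource languages without duplicating the full network.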


DOI: 10.21437/Interspeech.2019-2858

Cite as: Kannan, A., Datta, A., Sainath, T.N., Weinstein, E., Ramabhadran, B., Wu, Y., Bapna, A., Chen, Z., Lee, S. (2019) Large-Scale Multilingual Speech Recognition with a Streaming End-to-End Model. Proc. Interspeech 2019, 2130-2134, DOI: 10.21437/Interspeech.2019-2858.


@inproceedings{Kannan2019,
  author={Anjuli Kannan and Arindrima Datta and Tara N. Sainath and Eugene Weinstein and Bhuvana Ramabhadran and Yonghui Wu and Ankur Bapna and Zhifeng Chen and Seungji Lee},
  title={{Large-Scale Multilingual Speech Recognition with a Streaming End-to-End Model}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={2130--2134},
  doi={10.21437/Interspeech.2019-2858},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2858}
}