ISCA Archive Interspeech 2021

End-to-End Cross-Lingual Spoken Language Understanding Model with Multilingual Pretraining

Xianwei Zhang, Liang He

Spoken language understanding (SLU) plays an essential role in human-computer interaction. Most current SLU systems are cascades of automatic speech recognition (ASR) and natural language understanding (NLU). Error propagation and the scarcity of annotated speech data are two common difficulties for resource-poor languages. To address them, we propose a simple but effective end-to-end cross-lingual spoken language understanding model based on XLSR-53, a model pretrained on 53 languages by the Facebook research team. The end-to-end approach avoids error propagation, and the multilingual pretraining reduces data annotation requirements. Our proposed method achieves intent classification accuracies of 99.71% on the Fluent Speech Commands (FSC) English database and 79.89% on the CATSLU-MAP Chinese database. To the best of our knowledge, the former is the best result reported on the FSC database.
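The pipeline the abstract describes can be sketched in miniature: a pretrained multilingual speech encoder produces frame-level embeddings, which are pooled over time and fed to a linear intent head. The sketch below is illustrative only, assuming the paper's general architecture; the encoder is mocked with random embeddings (the actual work uses XLSR-53), and all function names, dimensions, and the pooling choice are assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 120 encoder frames, 1024-dim hidden states,
# 31 intent classes (FSC defines 31 distinct intents).
N_FRAMES, HIDDEN, N_INTENTS = 120, 1024, 31

def encode(waveform):
    """Stand-in for the XLSR-53 encoder: returns frame-level embeddings.

    A real system would run wav2vec 2.0-style feature extraction and
    transformer layers here; this mock just returns random frames.
    """
    return rng.standard_normal((N_FRAMES, HIDDEN))

def classify_intent(frame_embeddings, weights, bias):
    """Mean-pool over time, then apply a linear intent head."""
    pooled = frame_embeddings.mean(axis=0)   # (HIDDEN,)
    logits = pooled @ weights + bias         # (N_INTENTS,)
    return int(np.argmax(logits))

# Randomly initialized head (would be fine-tuned end-to-end in practice).
W = rng.standard_normal((HIDDEN, N_INTENTS)) * 0.01
b = np.zeros(N_INTENTS)

audio = rng.standard_normal(16000)           # dummy 1 s of 16 kHz audio
intent_id = classify_intent(encode(audio), W, b)
```

Because the whole stack is differentiable, the encoder and the intent head can be trained jointly on (audio, intent) pairs, which is what lets the end-to-end approach sidestep ASR error propagation.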


doi: 10.21437/Interspeech.2021-818

Cite as: Zhang, X., He, L. (2021) End-to-End Cross-Lingual Spoken Language Understanding Model with Multilingual Pretraining. Proc. Interspeech 2021, 4728-4732, doi: 10.21437/Interspeech.2021-818

@inproceedings{zhang21ha_interspeech,
  author={Xianwei Zhang and Liang He},
  title={{End-to-End Cross-Lingual Spoken Language Understanding Model with Multilingual Pretraining}},
  year={2021},
  booktitle={Proc. Interspeech 2021},
  pages={4728--4732},
  doi={10.21437/Interspeech.2021-818}
}