FarSpeech: Arabic Natural Language Processing for Live Arabic Speech

Mohamed Eldesouki, Naassih Gopee, Ahmed Ali, Kareem Darwish


This paper presents FarSpeech, QCRI's combined Arabic speech recognition, natural language processing (NLP), and dialect identification pipeline. It uses modern web technologies to capture live audio, transcribes the Arabic speech, processes the transcripts with NLP tools, and identifies the dialect of the speaker. For transcription, we use QATS, a Kaldi-based ASR system built on Time Delay Neural Networks (TDNNs). For NLP, we use a state-of-the-art Arabic NLP toolkit that employs various deep neural network and SVM-based models. Finally, our dialect identification system combines both acoustic and linguistic modalities. FarSpeech presents different screens to display the transcripts, text segmentation, part-of-speech tags, recognized named entities, diacritized text, and the identified dialect of the speech.
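The staged pipeline the abstract describes (live audio → ASR → NLP → dialect identification) can be sketched as below. This is a minimal illustrative sketch, not QCRI's actual code: every function name, signature, and return value here is a hypothetical placeholder standing in for the real QATS ASR system, the Arabic NLP toolkit, and the multi-modal dialect identifier.

```python
# Hypothetical sketch of the FarSpeech processing stages.
# All names and outputs are placeholders, not QCRI's real API.

def transcribe(audio: bytes) -> str:
    """Stand-in for QATS, the Kaldi-based TDNN ASR system."""
    return "placeholder Arabic transcript"

def analyze(transcript: str) -> dict:
    """Stand-in for the Arabic NLP toolkit: segmentation, POS
    tagging, named entity recognition, and diacritization."""
    tokens = transcript.split()
    return {
        "segments": tokens,
        "pos_tags": ["NOUN"] * len(tokens),  # placeholder tags
        "entities": [],                      # placeholder NER output
        "diacritized": transcript,           # placeholder diacritization
    }

def identify_dialect(audio: bytes, transcript: str) -> str:
    """Stand-in for the dialect identifier, which combines
    acoustic (audio) and linguistic (transcript) modalities."""
    return "MSA"

def farspeech_pipeline(audio: bytes) -> dict:
    """Chain the stages: ASR -> NLP -> dialect ID."""
    transcript = transcribe(audio)
    result = analyze(transcript)
    result["transcript"] = transcript
    result["dialect"] = identify_dialect(audio, transcript)
    return result

if __name__ == "__main__":
    out = farspeech_pipeline(b"\x00\x01")
    print(sorted(out.keys()))
```

The point of the sketch is only the data flow: the dialect identifier receives both the raw audio and the ASR transcript, matching the paper's multi-modal design.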


Cite as: Eldesouki, M., Gopee, N., Ali, A., Darwish, K. (2019) FarSpeech: Arabic Natural Language Processing for Live Arabic Speech. Proc. Interspeech 2019, 2372-2373.


@inproceedings{Eldesouki2019,
  author={Mohamed Eldesouki and Naassih Gopee and Ahmed Ali and Kareem Darwish},
  title={{FarSpeech: Arabic Natural Language Processing for Live Arabic Speech}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={2372--2373}
}