Who Needs Words? Lexicon-Free Speech Recognition

Tatiana Likhomanenko, Gabriel Synnaeve, Ronan Collobert


Lexicon-free speech recognition naturally deals with the problem of out-of-vocabulary (OOV) words. In this paper, we show that character-based language models (LMs) can perform as well as word-based LMs for speech recognition, in terms of word error rate (WER), even without restricting the decoding to a lexicon. We study character-based LMs and show that convolutional LMs can effectively leverage large (character) contexts, which is key for good downstream speech recognition performance. We specifically show that lexicon-free decoding with character-based LMs yields a better WER on utterances containing OOV words than lexicon-based decoding, with either character- or word-based LMs.
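To make the idea of lexicon-free decoding concrete, below is a minimal, illustrative sketch of a frame-synchronous beam search that scores character hypotheses with a character-level LM and never constrains prefixes to a word lexicon. This is not the decoder used in the paper; the `char_lm_score` callable, the `lm_weight` and `char_bonus` parameters, and the input format are hypothetical stand-ins, and CTC blank/repetition handling is omitted for brevity.

```python
# Illustrative lexicon-free beam search with a character-level LM.
# Hypothetical interfaces: `char_lm_score(prefix, c)` returns the LM
# log-probability of character c given the prefix; `log_probs` is a
# T x V table of per-frame acoustic character log-probabilities.
import math

def lexicon_free_beam_search(log_probs, alphabet, char_lm_score,
                             beam_size=16, lm_weight=1.0, char_bonus=0.0):
    beams = {"": 0.0}  # map: character prefix -> accumulated score
    for frame in log_probs:
        candidates = {}
        for prefix, score in beams.items():
            for v, c in enumerate(alphabet):
                # Extend every prefix with every character: no lexicon
                # restricts which prefixes are allowed to survive.
                new_prefix = prefix + c
                new_score = (score + frame[v]
                             + lm_weight * char_lm_score(prefix, c)
                             + char_bonus)
                if new_score > candidates.get(new_prefix, -math.inf):
                    candidates[new_prefix] = new_score
        # Prune to the top `beam_size` hypotheses before the next frame.
        beams = dict(sorted(candidates.items(),
                            key=lambda kv: kv[1], reverse=True)[:beam_size])
    return max(beams, key=beams.get)
```

Because hypotheses are built character by character, an OOV word is just another character sequence the LM can score, which is what allows this style of decoding to handle OOV words naturally.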


DOI: 10.21437/Interspeech.2019-3107

Cite as: Likhomanenko, T., Synnaeve, G., Collobert, R. (2019) Who Needs Words? Lexicon-Free Speech Recognition. Proc. Interspeech 2019, 3915-3919, DOI: 10.21437/Interspeech.2019-3107.


@inproceedings{Likhomanenko2019,
  author={Tatiana Likhomanenko and Gabriel Synnaeve and Ronan Collobert},
  title={{Who Needs Words? Lexicon-Free Speech Recognition}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={3915--3919},
  doi={10.21437/Interspeech.2019-3107},
  url={http://dx.doi.org/10.21437/Interspeech.2019-3107}
}