Neural Named Entity Recognition from Subword Units

Abdalghani Abujabal, Judith Gaspers


Named entity recognition (NER) is a vital task in spoken language understanding, which aims to identify mentions of named entities in text, e.g., from transcribed speech. Existing neural models for NER rely mostly on dedicated word-level representations, which suffer from two main shortcomings. First, the vocabulary size is large, yielding high memory requirements and long training times. Second, these models are not able to learn morphological or phonological representations. To remedy the above shortcomings, we adopt a neural solution based on bidirectional LSTMs and conditional random fields, where we rely on subword units, namely characters, phonemes, and bytes. For each word in an utterance, our model learns a representation from each of the subword units. We conducted experiments in a real-world large-scale setting for the use case of a voice-controlled device covering four languages with up to 5.5M utterances per language. Our experiments show that (1) with increasing training data, the performance of models trained solely on subword units approaches that of models with dedicated word-level embeddings (91.35 vs 93.92 F1 for English), while using a much smaller vocabulary (332 vs 74K entries), (2) subword units enhance models with dedicated word-level embeddings, and (3) combining different subword units improves performance.


DOI: 10.21437/Interspeech.2019-1305

Cite as: Abujabal, A., Gaspers, J. (2019) Neural Named Entity Recognition from Subword Units. Proc. Interspeech 2019, 2663-2667, DOI: 10.21437/Interspeech.2019-1305.


@inproceedings{Abujabal2019,
  author={Abdalghani Abujabal and Judith Gaspers},
  title={{Neural Named Entity Recognition from Subword Units}},
  year={2019},
  booktitle={Proc. Interspeech 2019},
  pages={2663--2667},
  doi={10.21437/Interspeech.2019-1305},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1305}
}