LSTM-Based NeuroCRFs for Named Entity Recognition

Marc-Antoine Rondeau, Yi Su


Although NeuroCRF, an augmented Conditional Random Field (CRF) model whose feature function is parameterized as a Feed-Forward Neural Network (FF NN) over word embeddings, has soundly outperformed the traditional linear-chain CRF on many sequence labeling tasks, it is held back by the fact that FF NNs have a fixed input length and therefore cannot take advantage of the full input sentence. We propose to address this issue by replacing the FF NN with a Long Short-Term Memory (LSTM) NN, which can summarize an input of arbitrary length into a fixed-dimension representation. The resulting model obtains an F1 score of 89.28 on the WikiNER dataset, a significant improvement over the NeuroCRF baseline's F1 of 87.58, which is itself a highly competitive result.
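The idea can be sketched as follows: an LSTM reads the word embeddings of the whole sentence and emits one hidden state per token, a linear layer maps each hidden state to per-label emission scores, and a CRF transition matrix with Viterbi decoding produces the label sequence. This is a minimal NumPy illustration of that architecture, not the authors' implementation; all dimensions, weights, and variable names are made up for the example, and training (e.g. the CRF forward algorithm and backpropagation) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

E, H, L = 8, 16, 4  # embedding dim, LSTM hidden dim, number of labels (illustrative)

# LSTM parameters: the four gates (input, forget, output, candidate) stacked row-wise
W = rng.normal(0, 0.1, (4 * H, E))   # input-to-hidden weights
U = rng.normal(0, 0.1, (4 * H, H))   # hidden-to-hidden weights
b = np.zeros(4 * H)                  # gate biases
V = rng.normal(0, 0.1, (L, H))       # projects hidden state to per-label emission scores
T = rng.normal(0, 0.1, (L, L))       # CRF transition scores, T[i, j] = score of i -> j

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c):
    """One LSTM time step: consumes embedding x, updates hidden h and cell c."""
    z = W @ x + U @ h + b
    i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
    g = np.tanh(z[3*H:])
    c = f * c + i * g          # cell state carries long-range context
    h = o * np.tanh(c)         # hidden state summarizes the sentence so far
    return h, c

def viterbi(emissions, trans):
    """Best label path under emission scores (n, L) and transition scores (L, L)."""
    n, num_labels = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((n, num_labels), dtype=int)
    for t in range(1, n):
        total = score[:, None] + trans + emissions[t][None, :]
        back[t] = total.argmax(axis=0)   # best previous label for each current label
        score = total.max(axis=0)
    path = [int(score.argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Run on a toy "sentence" of 5 random word embeddings.
sentence = rng.normal(size=(5, E))
h, c = np.zeros(H), np.zeros(H)
emissions = []
for x in sentence:
    h, c = lstm_step(x, h, c)
    emissions.append(V @ h)   # LSTM state replaces the FF NN's fixed-window features
labels = viterbi(np.stack(emissions), T)
print(labels)  # one label index per token
```

Because each hidden state is a function of every preceding token, the CRF's feature function is no longer limited to a fixed context window, which is the core advantage the paper describes over the FF-NN-based NeuroCRF.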


DOI: 10.21437/Interspeech.2016-288

Cite as

Rondeau, M., Su, Y. (2016) LSTM-Based NeuroCRFs for Named Entity Recognition. Proc. Interspeech 2016, 665-669.

BibTeX
@inproceedings{Rondeau+2016,
author={Marc-Antoine Rondeau and Yi Su},
title={LSTM-Based NeuroCRFs for Named Entity Recognition},
year=2016,
booktitle={Interspeech 2016},
doi={10.21437/Interspeech.2016-288},
url={http://dx.doi.org/10.21437/Interspeech.2016-288},
pages={665--669}
}