Experiments in Character-Level Neural Network Models for Punctuation

William Gale, Sarangarajan Parthasarathy

We explore character-level neural network models for inferring punctuation from text-only input. Punctuation inference is treated as a sequence tagging problem where the input is a sequence of un-punctuated characters and the output is a corresponding sequence of punctuation tags. We experiment with six architectures, all of which use a long short-term memory (LSTM) network for sequence modeling. They differ in the way the context and lookahead for a given character are derived: from simple character embedding with delayed output to enable lookahead, to complex convolutional neural networks (CNN) to capture context. We demonstrate that the accuracy of the proposed character-level models is competitive with that of a state-of-the-art word-level Conditional Random Field (CRF) baseline with carefully crafted features.
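To make the sequence-tagging formulation concrete, the following is a minimal sketch (not code from the paper) of how punctuated text could be converted into an aligned (character sequence, tag sequence) training pair. The tag inventory `PUNCT_TAGS` and the convention of attaching each punctuation mark's tag to the preceding character are illustrative assumptions.

```python
# Hypothetical tag inventory; the paper's actual tag set may differ.
PUNCT_TAGS = {",": "COMMA", ".": "PERIOD", "?": "QUESTION"}

def make_training_pair(punctuated):
    """Split punctuated text into an un-punctuated character sequence
    and an aligned, equal-length sequence of punctuation tags."""
    chars, tags = [], []
    for ch in punctuated:
        if ch in PUNCT_TAGS and chars:
            # Attach the punctuation tag to the preceding character.
            tags[-1] = PUNCT_TAGS[ch]
        else:
            chars.append(ch)
            tags.append("NONE")
    return "".join(chars), tags

chars, tags = make_training_pair("hi, there.")
# chars == "hi there"; tags mark COMMA after "i" and PERIOD after the final "e"
```

A tagger such as the paper's LSTM models would then be trained to predict the tag sequence from the character sequence alone; the delayed-output variants mentioned in the abstract effectively let the model see a few future characters before committing to each tag.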

DOI: 10.21437/Interspeech.2017-1710

Cite as: Gale, W., Parthasarathy, S. (2017) Experiments in Character-Level Neural Network Models for Punctuation. Proc. Interspeech 2017, 2794-2798, DOI: 10.21437/Interspeech.2017-1710.

@inproceedings{gale17_interspeech,
  author={William Gale and Sarangarajan Parthasarathy},
  title={Experiments in Character-Level Neural Network Models for Punctuation},
  booktitle={Proc. Interspeech 2017},
  pages={2794--2798},
  doi={10.21437/Interspeech.2017-1710},
  year={2017}
}