Noisy BiLSTM-Based Models for Disfluency Detection

Nguyen Bach, Fei Huang


This paper describes BiLSTM-based models for disfluency detection in speech transcripts that use residual BiLSTM blocks, self-attention, and a noisy training approach. Our best model not only surpasses BERT on four non-Switchboard test sets, but is also 20 times smaller than the BERT-based model [1]. We thus demonstrate that strong performance can be achieved without extensive use of very large training data. In addition, we show that robustness across data sets can be achieved with a noisy training approach, in which we found insertion to be the most useful noise for augmenting training data.
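The insertion noise mentioned above can be sketched as follows: filler tokens are inserted into otherwise fluent sentences, and the inserted positions serve as synthetic disfluency labels. This is a minimal illustration only; the filler inventory, insertion rate, and tag names here are assumptions for the sketch, not the paper's actual settings.

```python
import random

# Illustrative filler inventory (an assumption, not the paper's list).
FILLERS = ["uh", "um", "you", "know", "i", "mean", "well"]

def add_insertion_noise(tokens, rate=0.15, rng=random):
    """Insert random filler tokens before positions of a clean sentence.

    Returns (noisy_tokens, labels), where inserted tokens are tagged "I"
    (disfluent) and original tokens keep "O" (fluent).
    """
    noisy, labels = [], []
    for tok in tokens:
        if rng.random() < rate:
            noisy.append(rng.choice(FILLERS))
            labels.append("I")   # synthetic disfluency
        noisy.append(tok)
        labels.append("O")       # original, fluent token
    return noisy, labels

# Example: augment one clean sentence into a noisy training instance.
noisy, labels = add_insertion_noise("i want a flight to boston".split(), rate=0.3)
```

A model trained on such pairs learns to tag the inserted spans, which is one way insertion noise can augment limited annotated data.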


DOI: 10.21437/Interspeech.2019-1336

Cite as: Bach, N., Huang, F. (2019) Noisy BiLSTM-Based Models for Disfluency Detection. Proc. Interspeech 2019, 4230-4234, DOI: 10.21437/Interspeech.2019-1336.


@inproceedings{Bach2019,
  author={Nguyen Bach and Fei Huang},
  title={{Noisy BiLSTM-Based Models for Disfluency Detection}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={4230--4234},
  doi={10.21437/Interspeech.2019-1336},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1336}
}