On the Choice of Modeling Unit for Sequence-to-Sequence Speech Recognition

Kazuki Irie, Rohit Prabhavalkar, Anjuli Kannan, Antoine Bruguier, David Rybach, Patrick Nguyen


In conventional speech recognition, phoneme-based models outperform grapheme-based models for non-phonetic languages such as English, although the performance gap between the two typically narrows as the amount of training data increases. In this work, we examine the impact of the choice of modeling unit for attention-based encoder-decoder models. We conduct experiments on the LibriSpeech 100hr, 460hr, and 960hr tasks, using various target units (phoneme, grapheme, and word-piece); across all tasks, we find that grapheme or word-piece models consistently outperform phoneme-based models, even though they are evaluated without a lexicon or an external language model. We also investigate model complementarity: we find that we can improve WERs by up to 9% relative by rescoring N-best lists generated from a strong word-piece-based baseline with either the phoneme or the grapheme model. Rescoring an N-best list generated by the phonemic system, however, provides only limited improvements. Further analysis shows that the word-piece-based models produce more diverse N-best hypotheses, and thus lower oracle WERs, than phonemic models.
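The rescoring step described in the abstract can be sketched as follows: each hypothesis in the first-pass N-best list is scored again by a second model, and the two log-scores are combined log-linearly before re-selecting the best hypothesis. The interpolation weight `lam` and the toy scores below are illustrative assumptions, not the paper's exact configuration.

```python
def rescore_nbest(nbest, second_model_score, lam=0.5):
    """Pick the best hypothesis from an N-best list after log-linear
    interpolation of the first-pass score with a second model's score.

    nbest: list of (hypothesis, first_pass_log_score) pairs.
    second_model_score: function mapping a hypothesis string to its
        log-score under the rescoring model.
    lam: interpolation weight for the second model (assumed value).
    """
    best_hyp, best_score = None, float("-inf")
    for hyp, first_score in nbest:
        combined = (1 - lam) * first_score + lam * second_model_score(hyp)
        if combined > best_score:
            best_hyp, best_score = hyp, combined
    return best_hyp

# Toy usage: the first pass slightly prefers "a b c", but the
# rescoring model strongly prefers "a b d", which wins after interpolation.
nbest = [("a b c", -1.0), ("a b d", -1.2)]
second = {"a b c": -3.0, "a b d": -0.5}.get
print(rescore_nbest(nbest, second))  # prints "a b d"
```

With `lam=0.0` the function reduces to the first-pass decision, which makes the interpolation weight a convenient knob for tuning on a development set.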


DOI: 10.21437/Interspeech.2019-2277

Cite as: Irie, K., Prabhavalkar, R., Kannan, A., Bruguier, A., Rybach, D., Nguyen, P. (2019) On the Choice of Modeling Unit for Sequence-to-Sequence Speech Recognition. Proc. Interspeech 2019, 3800-3804, DOI: 10.21437/Interspeech.2019-2277.


@inproceedings{Irie2019,
  author={Kazuki Irie and Rohit Prabhavalkar and Anjuli Kannan and Antoine Bruguier and David Rybach and Patrick Nguyen},
  title={{On the Choice of Modeling Unit for Sequence-to-Sequence Speech Recognition}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={3800--3804},
  doi={10.21437/Interspeech.2019-2277},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2277}
}