Analyzing Phonetic and Graphemic Representations in End-to-End Automatic Speech Recognition

Yonatan Belinkov, Ahmed Ali, James Glass


End-to-end neural network systems for automatic speech recognition (ASR) are trained from acoustic features to text transcriptions. In contrast to modular ASR systems, which contain separately trained components for acoustic modeling, the pronunciation lexicon, and language modeling, the end-to-end paradigm is conceptually simpler and has the potential benefit of training the entire system on the end task. However, such neural network models are more opaque: it is not clear how to interpret the role of different parts of the network and what information the model learns during training. In this paper, we analyze the learned internal representations in an end-to-end ASR model. We evaluate the representation quality in terms of several classification tasks, comparing phonemes and graphemes, as well as different articulatory features. We study two languages (English and Arabic) and three datasets, finding remarkable consistency in how different properties are represented in different layers of the deep neural network.
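The evaluation method described above is commonly called probing: frame-level activations are extracted from a layer of the trained ASR model, and a simple classifier is trained to predict a linguistic property (e.g., the phoneme label) from them; its accuracy serves as a measure of how well that layer encodes the property. A minimal numpy-only sketch of such a probe follows; the synthetic "activations" and all names are illustrative and not from the paper.

```python
import numpy as np

# Probing sketch: train a linear softmax classifier on per-frame
# "layer activations" to predict a class label (e.g., a phoneme).
# The data here is synthetic (one Gaussian cluster per class),
# standing in for activations extracted from a real ASR model.
rng = np.random.default_rng(0)
n_per_class, dim, n_classes = 100, 16, 3

means = rng.normal(0, 5, size=(n_classes, dim))
X = np.concatenate([rng.normal(means[c], 1.0, size=(n_per_class, dim))
                    for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

def train_probe(X, y, n_classes, lr=0.1, epochs=200):
    """Train a linear softmax probe with plain gradient descent."""
    W = np.zeros((X.shape[1], n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[y]
    for _ in range(epochs):
        logits = X @ W + b
        logits -= logits.max(axis=1, keepdims=True)  # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad = (probs - onehot) / len(X)             # cross-entropy gradient
        W -= lr * X.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

W, b = train_probe(X, y, n_classes)
accuracy = np.mean(np.argmax(X @ W + b, axis=1) == y)
```

In the paper's setting, `X` would come from a chosen network layer and `y` from forced-aligned phoneme or grapheme labels; comparing probe accuracy across layers reveals where each property is best represented.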


DOI: 10.21437/Interspeech.2019-2599

Cite as: Belinkov, Y., Ali, A., Glass, J. (2019) Analyzing Phonetic and Graphemic Representations in End-to-End Automatic Speech Recognition. Proc. Interspeech 2019, 81-85, DOI: 10.21437/Interspeech.2019-2599.


@inproceedings{Belinkov2019,
  author={Yonatan Belinkov and Ahmed Ali and James Glass},
  title={{Analyzing Phonetic and Graphemic Representations in End-to-End Automatic Speech Recognition}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={81--85},
  doi={10.21437/Interspeech.2019-2599},
  url={http://dx.doi.org/10.21437/Interspeech.2019-2599}
}