ISCA Archive Interspeech 2021

Residual Energy-Based Models for End-to-End Speech Recognition

Qiujia Li, Yu Zhang, Bo Li, Liangliang Cao, Philip C. Woodland

End-to-end models with auto-regressive decoders have shown impressive results for automatic speech recognition (ASR). These models formulate the sequence-level probability as a product of the conditional probabilities of all individual tokens given their histories. However, the performance of locally normalised models can be sub-optimal due to factors such as exposure bias, so the model distribution differs from the underlying data distribution. In this paper, the residual energy-based model (R-EBM) is proposed to complement the auto-regressive ASR model and close the gap between the two distributions. Moreover, R-EBMs can also be regarded as utterance-level confidence estimators, which may benefit many downstream tasks. Experiments on a 100-hour LibriSpeech dataset show that R-EBMs can reduce the word error rates (WERs) by 8.2%/6.7% while improving areas under precision-recall curves of confidence scores by 12.6%/28.4% on the test-clean/test-other sets. Furthermore, on a state-of-the-art model trained with self-supervised learning (wav2vec 2.0), R-EBMs still significantly improve both WER and confidence estimation performance.
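To make the contrast in the abstract concrete, the following is a minimal LaTeX sketch of the two formulations, written in the standard residual-EBM notation rather than taken verbatim from the paper: x denotes the acoustic input, y a token sequence, and E_phi an assumed utterance-level energy function.

% Locally normalised auto-regressive ASR model: the sequence-level
% probability is a product of per-token conditionals.
P_\theta(\mathbf{y} \mid \mathbf{x}) = \prod_{t=1}^{T} P_\theta(y_t \mid \mathbf{y}_{<t}, \mathbf{x})

% Residual energy-based correction (notation assumed, following the
% residual-EBM literature): a globally normalised reweighting of the
% auto-regressive model by an energy term E_\phi.
P(\mathbf{y} \mid \mathbf{x}) =
  \frac{P_\theta(\mathbf{y} \mid \mathbf{x}) \, \exp\!\bigl(-E_\phi(\mathbf{x}, \mathbf{y})\bigr)}
       {Z_\phi(\mathbf{x})},
\qquad
Z_\phi(\mathbf{x}) = \sum_{\mathbf{y}'} P_\theta(\mathbf{y}' \mid \mathbf{x}) \, \exp\!\bigl(-E_\phi(\mathbf{x}, \mathbf{y}')\bigr)

Under this reading, the energy term rescores whole hypotheses on top of the locally normalised model, which is also why a quantity derived from E_phi can plausibly double as an utterance-level confidence score.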


doi: 10.21437/Interspeech.2021-690

Cite as: Li, Q., Zhang, Y., Li, B., Cao, L., Woodland, P.C. (2021) Residual Energy-Based Models for End-to-End Speech Recognition. Proc. Interspeech 2021, 4069-4073, doi: 10.21437/Interspeech.2021-690

@inproceedings{li21m_interspeech,
  author={Qiujia Li and Yu Zhang and Bo Li and Liangliang Cao and Philip C. Woodland},
  title={{Residual Energy-Based Models for End-to-End Speech Recognition}},
  year={2021},
  booktitle={Proc. Interspeech 2021},
  pages={4069--4073},
  doi={10.21437/Interspeech.2021-690}
}