ISCA Archive Interspeech 2021

BERT-Based Semantic Model for Rescoring N-Best Speech Recognition List

Dominique Fohr, Irina Illina

This work aims to improve automatic speech recognition (ASR) by modeling long-term semantic relations. We propose to do this by rescoring the ASR N-best hypothesis list. To achieve this, we propose two deep neural network (DNN) models that combine semantic, acoustic, and linguistic information. Our DNN rescoring models are designed to select hypotheses with better semantic consistency and therefore a lower word error rate (WER). As part of the input features to our DNN models, we investigate a powerful representation: dynamic contextual embeddings from the Transformer-based BERT. Acoustic and linguistic features are also included. We perform experiments on the publicly available TED-LIUM dataset, evaluating in clean and noisy conditions, with an n-gram language model and a recurrent neural network language model (RNNLM), specifically a long short-term memory (LSTM) model. The proposed rescoring approaches give significant WER improvements over the ASR system without rescoring. Furthermore, combining rescoring methods based on BERT and GPT-2 scores achieves the best results.
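The core idea of N-best rescoring, as described in the abstract, is to re-rank the recognizer's candidate transcriptions using a combined score. A minimal illustrative sketch is given below; the weights, score values, and field names are hypothetical (the paper's actual DNN rescoring models are not reproduced here), and the semantic score stands in for what BERT-based features would provide.

```python
# Hedged sketch of N-best rescoring via weighted score combination.
# All scores, weights, and field names are illustrative, NOT taken
# from the paper; the "semantic" score stands in for a BERT-derived
# semantic-consistency score.

def rescore_nbest(hypotheses, weights=(1.0, 0.5, 0.8)):
    """Return the hypothesis maximizing a weighted sum of
    acoustic, language-model, and semantic log-scores."""
    w_ac, w_lm, w_sem = weights

    def combined(h):
        return (w_ac * h["acoustic"]
                + w_lm * h["lm"]
                + w_sem * h["semantic"])

    return max(hypotheses, key=combined)

# Toy 2-best list: the second hypothesis has a slightly better
# acoustic score but much worse semantic consistency.
nbest = [
    {"text": "recognize speech", "acoustic": -10.2, "lm": -4.1, "semantic": -1.0},
    {"text": "wreck a nice beach", "acoustic": -9.8, "lm": -5.0, "semantic": -6.0},
]
best = rescore_nbest(nbest)
```

With these toy numbers, the semantic term outweighs the small acoustic advantage of the second hypothesis, so the semantically consistent transcription is selected.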

doi: 10.21437/Interspeech.2021-313

Cite as: Fohr, D., Illina, I. (2021) BERT-Based Semantic Model for Rescoring N-Best Speech Recognition List. Proc. Interspeech 2021, 1867-1871, doi: 10.21437/Interspeech.2021-313

@inproceedings{fohr21_interspeech,
  author={Dominique Fohr and Irina Illina},
  title={{BERT-Based Semantic Model for Rescoring N-Best Speech Recognition List}},
  year=2021,
  booktitle={Proc. Interspeech 2021},
  pages={1867--1871},
  doi={10.21437/Interspeech.2021-313}
}