INTERSPEECH 2015
16th Annual Conference of the International Speech Communication Association

Dresden, Germany
September 6-10, 2015

How to Evaluate ASR Output for Named Entity Recognition?

Mohamed Ameur Ben Jannet (1), Olivier Galibert (1), Martine Adda-Decker (2), Sophie Rosset (3)

(1) LNE, France
(2) LPP (UMR 7018), France
(3) LIMSI, France

The standard metric for evaluating automatic speech recognition (ASR) systems is the word error rate (WER). WER has proven very useful for assessing stand-alone ASR systems. Nowadays, however, these systems are often embedded in complex natural language processing pipelines that perform tasks such as speech translation, human-machine dialogue, or information retrieval from speech. This heightens the need for the speech processing community to design new evaluation metrics that estimate the quality of automatic transcriptions within their larger applicative context.
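
For reference, WER is conventionally computed from a minimum-edit-distance alignment between the hypothesis and the reference transcript, counting substitutions (S), deletions (D), and insertions (I) against the number of reference words (N):

    \mathrm{WER} = \frac{S + D + I}{N}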
    We introduce ATENE, a new measure for evaluating ASR output in the context of named entity recognition. ATENE uses a probabilistic model to estimate the risk that ASR errors induce downstream errors in named entity detection. Our evaluation on the ETAPE data shows that ATENE correlates more strongly than WER with named entity recognition performance across automatic speech transcription systems.
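
The meta-evaluation behind this claim can be sketched as follows: score each ASR system with a candidate metric, measure the NER error rate on that system's output, and check how strongly the two series correlate across systems. A minimal Python sketch of this protocol (the per-system scores below are invented placeholders, not the ETAPE results):

from math import sqrt
from statistics import mean

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length score lists.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-system scores for illustration only.
metric_scores = [0.18, 0.22, 0.25, 0.31, 0.36]  # e.g., WER or ATENE per ASR system
ner_errors    = [0.27, 0.30, 0.35, 0.33, 0.45]  # NER error rate on each system's output

# A metric better suited to the NER use case should correlate more strongly.
print(pearson(metric_scores, ner_errors))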

Bibliographic reference.  Ben Jannet, Mohamed Ameur / Galibert, Olivier / Adda-Decker, Martine / Rosset, Sophie (2015): "How to evaluate ASR output for named entity recognition?", in INTERSPEECH 2015, 1289-1293.