Effects of Talker Dialect, Gender & Race on Accuracy of Bing Speech and YouTube Automatic Captions

Rachael Tatman, Conner Kasten


This project compares the accuracy of two automatic speech recognition (ASR) systems — Bing Speech and YouTube’s automatic captions — across gender, race, and four dialects of American English. The dialects included were chosen for their acoustic dissimilarity. Bing Speech showed differences in word error rate (WER) between dialects and ethnicities, but these differences were not statistically reliable. YouTube’s automatic captions, however, did have statistically different WERs across dialects and races, with the lowest average error rates for General American and white talkers, respectively. Neither system had a reliably different WER between genders, although such a gender difference had previously been reported for YouTube’s automatic captions [1]. The higher error rate for non-white talkers is worrying, however, as it may reduce the utility of these systems for talkers of color.
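The accuracy measure used throughout, word error rate, is the standard ASR metric: the word-level edit distance (substitutions + deletions + insertions) between the system's transcript and a reference transcript, divided by the number of words in the reference. A minimal sketch of the computation, via dynamic-programming Levenshtein distance over words — this is an illustration of the metric, not the authors' evaluation code:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)
```

For example, a caption that drops one word from a six-word reference has a WER of 1/6.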


 DOI: 10.21437/Interspeech.2017-1746

Cite as: Tatman, R., Kasten, C. (2017) Effects of Talker Dialect, Gender & Race on Accuracy of Bing Speech and YouTube Automatic Captions. Proc. Interspeech 2017, 934-938, DOI: 10.21437/Interspeech.2017-1746.


@inproceedings{Tatman2017,
  author={Rachael Tatman and Conner Kasten},
  title={Effects of Talker Dialect, Gender \& Race on Accuracy of Bing Speech and YouTube Automatic Captions},
  year={2017},
  booktitle={Proc. Interspeech 2017},
  pages={934--938},
  doi={10.21437/Interspeech.2017-1746},
  url={http://dx.doi.org/10.21437/Interspeech.2017-1746}
}