Ordinate developed an automatic assessment of oral reading fluency that was administered to a large sample of American adults. Because fluent reading entails accurate reading, the machine's evaluations of oral reading accuracy were themselves assessed. This paper reviews the methods and results of a study that measured the accuracy of, and checked for bias in, a large-scale automatic assessment of oral reading fluency. An experiment compared machine scores with human ratings to quantify scoring accuracy and to detect any bias against linguistic/ethnic groups. The individual data products that make up the machine scores are described, and the validation experiment is presented. The machine scores were found to be substantially identical to the human ratings.
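As a minimal sketch of the kind of validation described here (not the authors' implementation), machine scores can be correlated with human ratings and per-group score differences can be inspected for bias. The column names, file layout, and statistical tests below are assumptions chosen for illustration only.

    # Sketch: compare machine scores with human ratings and probe for group bias.
    # Column names ("machine", "human", "group") and the CSV file are hypothetical.
    import pandas as pd
    from scipy import stats

    def validate_scores(path: str) -> None:
        """Correlate machine scores with human ratings and compare groups."""
        df = pd.read_csv(path)  # assumed columns: machine, human, group

        # Overall agreement: Pearson correlation between machine and human scores.
        r, p = stats.pearsonr(df["machine"], df["human"])
        print(f"machine-vs-human correlation r = {r:.3f} (p = {p:.3g})")

        # Bias probe: mean machine-minus-human difference within each group.
        df["diff"] = df["machine"] - df["human"]
        for group, sub in df.groupby("group"):
            t, p_g = stats.ttest_1samp(sub["diff"], 0.0)
            print(f"{group}: mean diff = {sub['diff'].mean():.3f} "
                  f"(t = {t:.2f}, p = {p_g:.3g})")

    if __name__ == "__main__":
        validate_scores("reading_scores.csv")  # hypothetical input file

A near-zero mean difference within every group, together with a high overall correlation, would be consistent with the paper's conclusion that the machine scores closely match human ratings without systematic group-level bias.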
Cite as: Balogh, J., Bernstein, J., Cheng, J., Townshend, B. (2007) Automatic evaluation of reading accuracy: assessing machine scores. Proc. Speech and Language Technology in Education (SLaTE 2007), 112-115
@inproceedings{balogh07_slate,
  author    = {Jennifer Balogh and Jared Bernstein and Jian Cheng and Brent Townshend},
  title     = {{Automatic evaluation of reading accuracy: assessing machine scores}},
  year      = {2007},
  booktitle = {Proc. Speech and Language Technology in Education (SLaTE 2007)},
  pages     = {112--115}
}