Automatic reading assessment software has the difficult task of trying to model human-based observations, which have both objective and subjective components. In this paper, we mimic the grading patterns of a ground-truth (average) evaluator in order to produce models that agree with many people's judgments. We examine one particular reading task, where children read a list of words aloud, and evaluators rate the children's overall reading ability on a scale from one to seven. We first extract various features correlated with the specific cues that evaluators said they used. We then compare various supervised learning methods that map the most relevant features to the ground-truth evaluator scores. Our final system predicted these scores with a correlation of 0.91, higher than the average inter-evaluator agreement.
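As a loose illustration of the setup described above (not the authors' actual system), the sketch below maps a set of evaluator-informed features to ground-truth scores with a supervised regressor and measures agreement by Pearson correlation, the metric reported in the abstract. The feature set, the synthetic data, and the choice of ridge regression are assumptions made for illustration only; the paper compares several supervised learning methods on real feature data.

```python
# Minimal sketch, NOT the published implementation: predict ground-truth
# evaluator scores (1-7 scale) from cue-based features and report the
# Pearson correlation between predictions and held-out scores.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy data: one row per child; columns stand in for hypothetical
# evaluator-informed features (e.g. fraction of words read correctly,
# speaking rate, pause statistics).
n_children, n_features = 100, 5
X = rng.normal(size=(n_children, n_features))
# Synthetic "ground-truth evaluator" scores, clipped to the 1-7 scale.
y = np.clip(
    4 + X @ rng.normal(size=n_features) + rng.normal(scale=0.5, size=n_children),
    1, 7,
)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Ridge regression is one plausible supervised mapping; the paper
# evaluates multiple learning methods.
model = Ridge(alpha=1.0).fit(X_train, y_train)
pred = model.predict(X_test)

r, _ = pearsonr(pred, y_test)
print(f"Pearson correlation with evaluator scores: {r:.2f}")
```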
Cite as: Black, M., Tepperman, J., Lee, S., Narayanan, S.S. (2009) Predicting children's reading ability using evaluator-informed features. Proc. Interspeech 2009, 1895-1898, doi: 10.21437/Interspeech.2009-549
@inproceedings{black09_interspeech,
  author={Matthew Black and Joseph Tepperman and Sungbok Lee and Shrikanth S. Narayanan},
  title={{Predicting children's reading ability using evaluator-informed features}},
  year=2009,
  booktitle={Proc. Interspeech 2009},
  pages={1895--1898},
  doi={10.21437/Interspeech.2009-549}
}