Automatic Evaluation of Children Reading Aloud on Sentences and Pseudowords

Jorge Proença, Carla Lopes, Michael Tjalve, Andreas Stolcke, Sara Candeias, Fernando Perdigão


Reading-aloud performance in children is typically assessed by teachers on an individual basis, manually marking reading time and incorrectly read words. A computational tool that assists with recording reading tasks, automatically analyzes them, and provides performance metrics could be a significant help. Towards that goal, this work presents an approach to automatically predicting the overall reading-aloud ability of primary school children (6–10 years old), based on the reading of sentences and pseudowords. The opinions of primary school teachers, who provided 0–5 scores closely tied to the expectations at the end of each grade, were gathered as the ground truth of performance. To predict these scores automatically, features based on reading speed and the number of disfluencies were extracted after an automatic disfluency detection step. Various regression models were trained, with Gaussian process regression giving the best results for automatic features. Feature selection over both the sentence and pseudoword reading tasks gave the closest predictions, with a correlation of 0.944. With manual annotation, the best correlation obtained was 0.952, so the fully automatic approach was only 0.8% worse. Furthermore, the error of the predicted scores relative to the ground truth was found to be smaller than the deviation of evaluators' opinions per child.
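The pipeline summarized above (reading-speed and disfluency features feeding a Gaussian process regressor that predicts a 0–5 score) can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the feature set, kernel, and all data values are hypothetical placeholders, using scikit-learn's `GaussianProcessRegressor`.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical per-child features: [words per minute, disfluencies per word].
# In the paper these come from automatic disfluency detection; here they are synthetic.
X = rng.uniform(low=[20.0, 0.0], high=[120.0, 0.5], size=(40, 2))

# Synthetic 0-5 "teacher score": faster, less disfluent reading scores higher.
y = np.clip(5.0 * (X[:, 0] / 120.0) * (1.0 - X[:, 1]), 0.0, 5.0)

# An RBF kernel with a noise term; length scales per feature are guesses.
kernel = 1.0 * RBF(length_scale=[30.0, 0.1]) + WhiteKernel(noise_level=0.1)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# GPR yields both a predicted score and an uncertainty estimate per child.
pred, std = gpr.predict(X[:5], return_std=True)
print(np.round(pred, 2), np.round(std, 2))
```

One reason Gaussian process regression fits this task well is that its per-prediction standard deviation flags children whose scores the model is unsure about, which could be surfaced to a teacher for manual review.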


DOI: 10.21437/Interspeech.2017-1541

Cite as: Proença, J., Lopes, C., Tjalve, M., Stolcke, A., Candeias, S., Perdigão, F. (2017) Automatic Evaluation of Children Reading Aloud on Sentences and Pseudowords. Proc. Interspeech 2017, 2749-2753, DOI: 10.21437/Interspeech.2017-1541.


@inproceedings{Proença2017,
  author={Jorge Proença and Carla Lopes and Michael Tjalve and Andreas Stolcke and Sara Candeias and Fernando Perdigão},
  title={Automatic Evaluation of Children Reading Aloud on Sentences and Pseudowords},
  year=2017,
  booktitle={Proc. Interspeech 2017},
  pages={2749--2753},
  doi={10.21437/Interspeech.2017-1541},
  url={http://dx.doi.org/10.21437/Interspeech.2017-1541}
}