INTERSPEECH 2007
8th Annual Conference of the International Speech Communication Association

Antwerp, Belgium
August 27-31, 2007

Automatic Large-Scale Oral Language Proficiency Assessment

Febe de Wet, Christa van der Walt, Thomas Niesler

Stellenbosch University, South Africa

We describe the first results obtained during the development of an automatic system for the assessment of spoken English proficiency of university students. The ultimate aim of this system is to allow fast, consistent and objective assessment of oral proficiency for the purpose of placing students in courses appropriate to their language skills. Rate of speech (ROS) was chosen as an indicator of fluency for a number of oral language exercises. In a test involving 106 student subjects, the assessments of 5 human raters are compared with evaluations based on automatically derived ROS scores. It is found that, although the ROS is estimated accurately, the correlation between human assessments and the ROS scores varies between 0.5 and 0.6. However, the results also indicate that only two of the five human raters were consistent in their appraisals, and that there was only mild inter-rater agreement.
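To make the two quantities in the abstract concrete, the sketch below shows one plausible way to compute an ROS score and to correlate it with human ratings. The abstract does not specify the exact ROS definition or correlation measure used in the paper, so the choices here (phones per second of speech from a forced alignment, Pearson correlation) are assumptions, and all names, classes and data in the example are hypothetical.

# Minimal sketch, assuming ROS = phones per second of speech (silence excluded)
# and agreement measured as a Pearson correlation. Illustrative only.

from dataclasses import dataclass
from typing import List
import statistics


@dataclass
class AlignedSegment:
    """One speech segment from a forced alignment (times in seconds)."""
    start: float
    end: float
    num_phones: int


def rate_of_speech(segments: List[AlignedSegment]) -> float:
    """Phones per second of speech, counting only the aligned speech segments."""
    total_phones = sum(seg.num_phones for seg in segments)
    total_speech = sum(seg.end - seg.start for seg in segments)
    return total_phones / total_speech if total_speech > 0 else 0.0


def pearson(x: List[float], y: List[float]) -> float:
    """Pearson correlation between automatic and human scores."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5


if __name__ == "__main__":
    # Toy data: three speakers, each with a few aligned segments and a
    # hypothetical human fluency rating (not taken from the paper).
    speakers = [
        [AlignedSegment(0.0, 2.0, 20), AlignedSegment(2.5, 4.0, 14)],
        [AlignedSegment(0.0, 1.5, 10), AlignedSegment(2.0, 3.5, 11)],
        [AlignedSegment(0.0, 2.5, 30), AlignedSegment(3.0, 5.0, 26)],
    ]
    human_scores = [3.5, 2.0, 4.5]

    ros_scores = [rate_of_speech(s) for s in speakers]
    print("ROS scores:", [round(r, 2) for r in ros_scores])
    print("Correlation with human ratings:",
          round(pearson(ros_scores, human_scores), 2))

In practice, the segment boundaries and phone counts would come from a forced alignment produced by a speech recogniser, and the correlation would be computed per exercise across all 106 subjects and raters.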


Bibliographic reference.  de Wet, Febe / van der Walt, Christa / Niesler, Thomas (2007): "Automatic large-scale oral language proficiency assessment", in Proc. INTERSPEECH 2007, 218-221.