Speech and Language Technology in Education (SLaTE 2013)

Grenoble, France
August 30-September 1, 2013

Methodological Issues in Evaluating a Spoken CALL Game: Can Crowdsourcing Help Us Perform Controlled Experiments?

Manny Rayner, Nikos Tsourakis

University of Geneva, Switzerland

We summarise a series of experiments we have carried out over the last three years on CALL-SLT, a speech-enabled web-based CALL game for learning and improving fluency in domain language, focussing on the methodological aspects. In particular, we argue that our previous evaluations have been systematically flawed due to the lack of a control group. We present a detailed description of our most recent evaluation, in which 130 subjects, recruited using crowdsourcing methods, followed a short course in basic French over a period of one week, with 24 subjects completing the course. About a third of the subjects (half of those who finished) were assigned to a control group who used a version of the system with speech recognition feedback disabled. Subjects in both groups demonstrated significant improvements in language skills over the duration of the experiment, but the improvements were significantly larger for the non-control subjects. We argue in conclusion that this type of experiment opens up interesting new ways to attack the difficult problem of performing controlled experiments with CALL applications.
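As a concrete illustration of the kind of analysis such a control-group design enables, the sketch below compares improvement scores within and between the two groups using standard two-sample tests. This is a minimal sketch only: the test choice, data layout, and variable names are assumptions for illustration and are not taken from the paper.

```python
# Sketch: comparing improvement between control and non-control groups.
# Assumed data format: one improvement score (post-test minus pre-test)
# per subject, grouped by whether speech recognition feedback was enabled.
from scipy import stats

# Hypothetical improvement scores for illustration only; the real study
# used its own measures of spoken-language performance.
control_gain = [0.5, 1.0, 0.2, 0.8, 0.4, 0.6]      # recognition feedback disabled
treatment_gain = [1.2, 1.8, 0.9, 1.5, 1.1, 1.4]    # recognition feedback enabled

# Within-group check: did each group improve at all?
for name, gains in [("control", control_gain), ("treatment", treatment_gain)]:
    t, p = stats.ttest_1samp(gains, popmean=0.0)
    print(f"{name}: mean gain = {sum(gains) / len(gains):.2f}, p = {p:.3f}")

# Between-group check: is the treatment group's improvement larger?
t, p = stats.ttest_ind(treatment_gain, control_gain, equal_var=False)
print(f"treatment vs control: t = {t:.2f}, p = {p:.3f}")
```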

Index Terms: CALL, speech recognition, evaluation, methodology, crowdsourcing


Bibliographic reference.  Rayner, Manny / Tsourakis, Nikos (2013): "Methodological issues in evaluating a spoken CALL game: can crowdsourcing help us perform controlled experiments?", In SLaTE-2013, 77-82.