INTERSPEECH 2013
14th Annual Conference of the International Speech Communication Association

Lyon, France
August 25-29, 2013

Motivational Feedback in Crowdsourcing: A Case Study in Speech Transcription

G. Riccardi, A. Ghosh, S. A. Chowdhury, Ali Orkan Bayer

Università di Trento, Italy

Feedback is a widely used strategy for enhancing both human and machine performance. In this paper we investigate the effect of live motivational feedback on motivating crowds and on improving the performance of the crowdsourcing computational model. The feedback allows workers to react in real time and to review past actions (e.g. word deletions), and thus to improve their performance on the current and future (sub)tasks. The feedback signal can be controlled via clean (e.g. expert) supervision or noisy supervision, trading off cost against the target performance of the crowdsourced task. The feedback signal is designed to enable crowd workers to improve their performance at the (sub)task level. The type and performance of the feedback signal are evaluated in the context of a speech transcription task. The Amazon Mechanical Turk (AMT) platform is used to transcribe speech utterances from different corpora. We show that under both clean (expert) and noisy (worker/turker) real-time feedback conditions, crowd workers provide significantly more accurate transcriptions in a shorter time.


Bibliographic reference. Riccardi, G. / Ghosh, A. / Chowdhury, S. A. / Bayer, Ali Orkan (2013): "Motivational feedback in crowdsourcing: a case study in speech transcription", in INTERSPEECH-2013, 1111-1115.