Framework for Conducting Tasks Requiring Human Assessment

Martin Grůber, Adam Chýlek, Jindřich Matoušek


This paper presents a web-based framework that improves and simplifies the design and deployment of tasks requiring human input, such as the transcription, annotation, and evaluation of speech, text, or images. The focus is on listening tests for assessing speech synthesis quality. The framework is flexible: many different types of tasks can be prepared and presented to participants, who work on them via a user-friendly GUI while their responses are recorded in a database. The framework is ready to be integrated as an external task for Amazon Mechanical Turk, but it can also be used as a stand-alone platform.
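
The paper's abstract does not detail the integration mechanism, but Mechanical Turk's standard route for embedding an externally hosted task is the ExternalQuestion schema: the requester publishes a HIT whose question is an iframe pointing at the external site, and the site posts the worker's answers back to MTurk. The sketch below, using the boto3 MTurk client, shows how a framework-hosted listening test might be published this way; the task URL, reward, and timing values are illustrative assumptions, not values from the paper.

    import boto3

    # Hypothetical URL at which the framework would serve one listening-test
    # task; the actual deployment URL is not given in the paper.
    TASK_URL = "https://example.org/listening-test?taskId=42"

    # ExternalQuestion XML: MTurk renders TASK_URL in an iframe of the
    # given height inside the HIT page.
    external_question = (
        '<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/'
        'AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">'
        f"<ExternalURL>{TASK_URL}</ExternalURL>"
        "<FrameHeight>600</FrameHeight>"
        "</ExternalQuestion>"
    )

    mturk = boto3.client(
        "mturk",
        region_name="us-east-1",
        # Sandbox endpoint for testing; drop endpoint_url to publish
        # to the live marketplace.
        endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
    )

    hit = mturk.create_hit(
        Title="Listening test: rate the quality of synthetic speech",
        Description="Listen to short audio samples and rate their naturalness.",
        Keywords="speech, audio, listening test, evaluation",
        Reward="0.50",                     # USD per completed assignment
        MaxAssignments=10,                 # number of participants per task
        LifetimeInSeconds=24 * 3600,       # how long the HIT stays available
        AssignmentDurationInSeconds=1800,  # time limit per participant
        Question=external_question,
    )
    print("HIT created:", hit["HIT"]["HITId"])

Under this scheme, the external page receives the worker's assignmentId as a query parameter and, once the task is done, submits the collected responses to MTurk's externalSubmit endpoint, so the framework can store the same responses in its own database and still credit the MTurk assignment.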


Cite as: Grůber, M., Chýlek, A., Matoušek, J. (2019) Framework for Conducting Tasks Requiring Human Assessment. Proc. Interspeech 2019, 4626-4627.


@inproceedings{Grůber2019,
  author={Martin Grůber and Adam Chýlek and Jindřich Matoušek},
  title={{Framework for Conducting Tasks Requiring Human Assessment}},
  year={2019},
  booktitle={Proc. Interspeech 2019},
  pages={4626--4627}
}