In this contribution, we combine the advantages of traditional crowdsourcing with contemporary machine learning algorithms, with the aim of ultimately obtaining reliable training data for audio processing in a faster, cheaper, and therefore more efficient manner than has previously been possible. We propose a novel crowdsourcing approach which brings a simulated active learning annotation scenario into a real-world environment, creating an intelligent and gamified crowdsourcing platform for manual audio annotation. Our platform combines two active learning query strategies with an internally calculated trustability score to efficiently reduce manual labelling efforts. This reduction is achieved in a twofold manner: first, our system automatically decides whether an instance requires annotation at all; second, it dynamically decides, depending on the quality of previously gathered annotations, exactly how many annotations are needed to reliably label an instance. The presented results indicate that our approach drastically reduces the annotation load and is considerably more efficient than conventional methods.
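The twofold reduction described above can be sketched as two decision functions. This is a minimal illustrative sketch, not the authors' implementation: the thresholds, function names, and the additive trust-accumulation rule are all assumptions made here to make the two steps concrete.

```python
# Hypothetical sketch of the two decisions described in the abstract:
# (1) an active learning query strategy decides whether an instance
#     needs human annotation at all;
# (2) a trustability score decides how many annotations to collect
#     before the label is considered reliable.
# All thresholds and names below are illustrative assumptions.

def needs_annotation(class_probs, confidence_threshold=0.8):
    """Query strategy: request human labels only when the model's
    most confident class probability falls below a threshold."""
    return max(class_probs) < confidence_threshold

def required_annotations(trust_scores, target_trust=1.5, max_annotators=5):
    """Collect annotations until the accumulated trustability of the
    contributing annotators reaches a target, up to a fixed maximum.
    `trust_scores` lists annotators in the order they would be asked."""
    total = 0.0
    for k, score in enumerate(trust_scores[:max_annotators], start=1):
        total += score
        if total >= target_trust:
            return k  # enough trusted annotations gathered
    return min(len(trust_scores), max_annotators)
```

Under this sketch, a confidently classified instance is skipped entirely, while an uncertain one is routed to only as many annotators as the platform's trust estimates require, rather than to a fixed number.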
Cite as: Hantke, S., Zhang, Z., Schuller, B. (2017) Towards Intelligent Crowdsourcing for Audio Data Annotation: Integrating Active Learning in the Real World. Proc. Interspeech 2017, 3951-3955, doi: 10.21437/Interspeech.2017-406
@inproceedings{hantke17b_interspeech,
  author={Simone Hantke and Zixing Zhang and Björn Schuller},
  title={{Towards Intelligent Crowdsourcing for Audio Data Annotation: Integrating Active Learning in the Real World}},
  year={2017},
  booktitle={Proc. Interspeech 2017},
  pages={3951--3955},
  doi={10.21437/Interspeech.2017-406}
}