16th Annual Conference of the International Speech Communication Association

Dresden, Germany
September 6-10, 2015

Transcribing Continuous Speech Using Mismatched Crowdsourcing

Preethi Jyothi, Mark Hasegawa-Johnson

University of Illinois at Urbana-Champaign, USA

Mismatched crowdsourcing derives speech transcriptions using crowd workers unfamiliar with the language being spoken. This approach has previously been demonstrated for isolated word transcription, but not for continuous speech. In this work, we demonstrate mismatched crowdsourcing of continuous speech with a word error rate under 45% in a large-vocabulary transcription task of short speech segments. To scale mismatched crowdsourcing to continuous speech, we propose a number of new WFST pruning techniques based on explicitly low-entropy models of the acoustic similarities among orthographic symbols as understood within a transcriber community. We also provide an information-theoretic analysis and estimate the amount of information lost in transcription by the mismatched crowd workers to be under 5 bits.
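The information-loss estimate above can be understood as a conditional entropy: how much uncertainty about the true symbols remains after observing the crowd workers' labels. A minimal sketch of that bookkeeping, using a small illustrative confusion matrix (the counts below are invented for demonstration and are not data from the paper):

```python
import math

# Hypothetical confusion counts: rows = true symbols X, cols = crowd labels Y.
# These numbers are illustrative only, not taken from the paper.
counts = {
    'p': {'p': 80, 'b': 15, 't': 5},
    'b': {'p': 20, 'b': 75, 't': 5},
    't': {'p': 5,  'b': 5,  't': 90},
}

def entropy(probs):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

total = sum(sum(row.values()) for row in counts.values())

# Marginal distribution over the true symbols X.
p_x = {x: sum(row.values()) / total for x, row in counts.items()}
h_x = entropy(p_x.values())

# Conditional entropy H(X|Y): the uncertainty about the true symbol
# that remains after seeing the crowd label -- the "information lost".
labels = {y for row in counts.values() for y in row}
h_x_given_y = 0.0
for y in labels:
    p_y = sum(counts[x].get(y, 0) for x in counts) / total
    cond = [counts[x].get(y, 0) / (p_y * total) for x in counts]
    h_x_given_y += p_y * entropy(cond)

# Mutual information I(X;Y) = H(X) - H(X|Y): bits actually conveyed.
mutual_info = h_x - h_x_given_y
print(f"H(X) = {h_x:.3f} bits, H(X|Y) = {h_x_given_y:.3f} bits, "
      f"I(X;Y) = {mutual_info:.3f} bits")
```

Under this toy channel the three true symbols are equiprobable, so H(X) = log2(3) ≈ 1.585 bits; the gap between H(X) and I(X;Y) is the per-symbol information lost to transcriber confusion.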


Bibliographic reference. Jyothi, Preethi / Hasegawa-Johnson, Mark (2015): "Transcribing continuous speech using mismatched crowdsourcing", In INTERSPEECH-2015, 2774-2778.