ISCA Archive Interspeech 2015

Transcribing continuous speech using mismatched crowdsourcing

Preethi Jyothi, Mark Hasegawa-Johnson

Mismatched crowdsourcing derives speech transcriptions using crowd workers unfamiliar with the language being spoken. This approach has been demonstrated for isolated word transcription tasks, but not previously for continuous speech. In this work, we demonstrate mismatched crowdsourcing of continuous speech with a word error rate of under 45% in a large-vocabulary transcription task of short speech segments. In order to scale mismatched crowdsourcing to continuous speech, we propose a number of new WFST pruning techniques based on explicitly low-entropy models of the acoustic similarities among orthographic symbols as understood within a transcriber community. We also provide an information-theoretic analysis and estimate the amount of information lost in transcription by the mismatched crowd workers to be under 5 bits.
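The abstract's "bits of information lost" can be read as a conditional entropy: given the crowd worker's (mismatched) transcript symbol, how much uncertainty remains about the spoken sound. The sketch below is a minimal, hypothetical illustration of that quantity computed from a toy joint distribution; the distribution, symbol inventory, and helper name are invented for illustration and are not taken from the paper.

```python
import math

def conditional_entropy(joint):
    """H(X|Y) in bits, for a joint distribution given as {(x, y): p}.

    Here X is the spoken sound and Y the crowd worker's transcript
    symbol; H(X|Y) measures the information about X lost in transcription.
    """
    # Marginal distribution over transcript symbols Y.
    p_y = {}
    for (x, y), p in joint.items():
        p_y[y] = p_y.get(y, 0.0) + p
    # H(X|Y) = -sum_{x,y} p(x,y) * log2( p(x,y) / p(y) )
    h = 0.0
    for (x, y), p in joint.items():
        if p > 0:
            h -= p * math.log2(p / p_y[y])
    return h

# Toy joint distribution over (spoken phoneme, transcribed letter):
# a non-native listener sometimes confuses /p/ and /b/ (numbers invented).
joint = {
    ("p", "p"): 0.3, ("p", "b"): 0.1,
    ("b", "b"): 0.3, ("b", "p"): 0.1,
    ("t", "t"): 0.2,
}
print(round(conditional_entropy(joint), 3))  # ~0.649 bits lost per symbol
```

A lossless transcriber (every sound mapped to a unique symbol) would give H(X|Y) = 0; the paper's reported figure of under 5 bits is an aggregate estimate over whole transcripts, not a per-symbol toy value like the one above.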

doi: 10.21437/Interspeech.2015-584

Cite as: Jyothi, P., Hasegawa-Johnson, M. (2015) Transcribing continuous speech using mismatched crowdsourcing. Proc. Interspeech 2015, 2774-2778, doi: 10.21437/Interspeech.2015-584

@inproceedings{jyothi15_interspeech,
  author={Preethi Jyothi and Mark Hasegawa-Johnson},
  title={{Transcribing continuous speech using mismatched crowdsourcing}},
  year={2015},
  booktitle={Proc. Interspeech 2015},
  pages={2774--2778},
  doi={10.21437/Interspeech.2015-584}
}