Challenging the Boundaries of Speech Recognition: The MALACH Corpus

Michael Picheny, Zoltán Tüske, Brian Kingsbury, Kartik Audhkhasi, Xiaodong Cui, George Saon


There has been huge progress in speech recognition over the last several years. Tasks once thought extremely difficult, such as SWITCHBOARD, now approach levels of human performance. The MALACH corpus (LDC catalog LDC2012S05), a 375-hour subset of a large archive of Holocaust testimonies collected by the Survivors of the Shoah Visual History Foundation, presents significant challenges to the speech community. The collection consists of unconstrained, natural speech filled with disfluencies, heavy accents, age-related coarticulations, un-cued speaker and language switching, and emotional speech, all still open problems for speech recognition systems. Transcription is challenging even for skilled human annotators. This paper proposes that the community focus on the MALACH corpus to develop speech recognition systems that are more robust with respect to accents, disfluencies, and emotional speech. To lower the barrier to entry, a lexicon and training and testing setups have been created, and baseline results using current deep learning technologies are presented. The metadata has just been released by LDC (LDC2019S11). It is hoped that this resource will enable the community to build on these baselines so that the extremely important information in these and related oral histories becomes accessible to a wider audience.


DOI: 10.21437/Interspeech.2019-1907

Cite as: Picheny, M., Tüske, Z., Kingsbury, B., Audhkhasi, K., Cui, X., Saon, G. (2019) Challenging the Boundaries of Speech Recognition: The MALACH Corpus. Proc. Interspeech 2019, 326-330, DOI: 10.21437/Interspeech.2019-1907.


@inproceedings{Picheny2019,
  author={Michael Picheny and Zoltán Tüske and Brian Kingsbury and Kartik Audhkhasi and Xiaodong Cui and George Saon},
  title={{Challenging the Boundaries of Speech Recognition: The MALACH Corpus}},
  year={2019},
  booktitle={Proc. Interspeech 2019},
  pages={326--330},
  doi={10.21437/Interspeech.2019-1907},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1907}
}