Personalizing ASR for Dysarthric and Accented Speech with Limited Data

Joel Shor, Dotan Emanuel, Oran Lang, Omry Tuval, Michael Brenner, Julie Cattiau, Fernando Vieira, Maeve McNally, Taylor Charbonneau, Melissa Nollstadt, Avinatan Hassidim, Yossi Matias


Automatic speech recognition (ASR) systems have dramatically improved over the last few years. ASR systems are most often trained on ‘typical’ speech, which means that underrepresented groups do not experience the same level of improvement. In this paper, we present and evaluate finetuning techniques to improve ASR for users with non-standard speech. We focus on two types of non-standard speech: speech from people with amyotrophic lateral sclerosis (ALS) and accented speech. We train personalized models that achieve 62% and 35% relative WER improvement on these two groups, bringing the absolute WER for ALS speakers, on a test set of message bank phrases, down to 10% for mild dysarthria and 20% for more serious dysarthria. We show that 71% of the improvement comes from only 5 minutes of training data. Finetuning a particular subset of layers (with many fewer parameters) often gives better results than finetuning the entire model. This is a first step towards building state-of-the-art ASR models for dysarthric speech.
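The abstract's key technique, finetuning only a subset of a model's layers, can be sketched as selecting which parameters to update while freezing the rest. A minimal, framework-agnostic sketch follows; the layer names (`encoder.layer0`, `joint`, etc.) are illustrative assumptions, not the actual architecture or layer choices from the paper:

```python
# Hedged sketch of selective-layer finetuning: update only parameters whose
# names fall under the chosen layer prefixes, keeping everything else frozen.
# Layer/parameter names below are hypothetical examples.

def select_trainable(param_names, finetune_layers):
    """Return the parameter names that should receive gradient updates."""
    return [name for name in param_names
            if any(name.startswith(layer) for layer in finetune_layers)]

# A toy parameter list standing in for a full ASR model's parameters.
params = [
    "encoder.layer0.weight", "encoder.layer1.weight",
    "joint.weight", "decoder.embedding.weight",
]

# Finetune only one encoder layer and the joint network; the rest of the
# (much larger) model stays frozen, greatly reducing trainable parameters.
trainable = select_trainable(params, finetune_layers=["encoder.layer0", "joint"])
```

In a real framework the same selection would typically be applied by disabling gradients on the frozen parameters (e.g. setting `requires_grad = False` in PyTorch) before constructing the optimizer over the remaining subset.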


DOI: 10.21437/Interspeech.2019-1427

Cite as: Shor, J., Emanuel, D., Lang, O., Tuval, O., Brenner, M., Cattiau, J., Vieira, F., McNally, M., Charbonneau, T., Nollstadt, M., Hassidim, A., Matias, Y. (2019) Personalizing ASR for Dysarthric and Accented Speech with Limited Data. Proc. Interspeech 2019, 784-788, DOI: 10.21437/Interspeech.2019-1427.


@inproceedings{Shor2019,
  author={Joel Shor and Dotan Emanuel and Oran Lang and Omry Tuval and Michael Brenner and Julie Cattiau and Fernando Vieira and Maeve McNally and Taylor Charbonneau and Melissa Nollstadt and Avinatan Hassidim and Yossi Matias},
  title={{Personalizing ASR for Dysarthric and Accented Speech with Limited Data}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={784--788},
  doi={10.21437/Interspeech.2019-1427},
  url={http://dx.doi.org/10.21437/Interspeech.2019-1427}
}